HF Hub - Share and discover more about AI with social posts from the community.
Huge updates and improvements for FLUX LoRA training: https://www.patreon.com/posts/kohya-flux-lora-110293257

10 GB, 16 GB, 24 GB and 48 GB GPU configs added - the 10 GB config is sadly around 3x to 5x slower

Massed Compute, RunPod and Windows Kohya SS GUI LoRA installers added to the zip file

Also currently testing a new 16 GB FLUX LoRA training config and a new approach to regularization images. Moreover, testing Apply T5 Attention Mask too. Let's see if the Kohya FLUX LoRA workflow will become even better or not.

Also, massive grid comparisons shared here: https://www.reddit.com/r/StableDiffusion/comments/1eyj4b8/kohya_ss_gui_very_easy_f (Reddit post: Kohya FLUX LoRA and Fine Tuning Training Full Tutorial For Local Windows and Cloud RunPod and Massed Compute for Research & Development…)
Shoutout to everyone who participated in BigScience! Doesn't get enough credit but IMO paved the way for open-source LLMs!

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model (2211.05100)

bigscience/bloom

bigscience/bloomz
https://huggingface.co/bigscience/bloom
ACL 2024: The Missing Papers

Apparently, some papers from ACL 2024 are still not listed in the ACL Anthology. While this issue will hopefully be fixed soon, we should give those papers additional spotlight.

Some of my favorites:

1. Dolma is an English corpus that encompasses 3 trillion tokens. Additionally, it is accompanied by an exceptional software package that considerably advances the state of the art in preparing data for LLM pretraining. (Source: I am currently using Dolma.)
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research (2402.00159)


2. In the paper "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models", the authors show how extending the context length impacts an LLM's reasoning performance. I asked myself a similar question a few months ago, so this paper is highly interesting to me.
Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models (2402.14848)


This was brought to my attention through a LinkedIn post by @ShayeghB, who is also affected:
Ensemble-Based Unsupervised Discontinuous Constituency Parsing by Tree Averaging (2403.00143)


View all the missing papers here:
https://theshayegh.github.io/ACL2024MissingPapers/
I can't believe this... Phi-3.5-mini (3.8B) running in-browser at ~90 tokens/second on WebGPU w/ Transformers.js and ONNX Runtime Web! 🤯 Since everything runs 100% locally, no messages are sent to a server — a huge win for privacy!
- 🤗 Demo:
webml-community/phi-3.5-webgpu

- 🧑‍💻 Source code: https://github.com/huggingface/transformers.js-examples/tree/main/phi-3.5-webgpu
🌐 Check out the new dataset sourced from Fishki.net, one of the popular entertainment and news portals on the Russian internet, known for its diverse content including humor, interesting facts, and viral stories:
nyuuzyou/fishkinet-posts

📊 Dataset highlights:
- 369,180 posts
- Includes original posts with titles, content, images, and metadata
- Each entry contains URL, title, author, date, tags, content, and image URLs
- Primarily in Russian language
- Covers a wide range of topics in entertainment, news, and social media content
- Spans nearly two decades of posts, likely from the early 2000s to 2024
- Dedicated to the public domain under the Creative Commons Zero (CC0) license
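
For anyone who wants to poke at the data, here is a minimal sketch using the datasets library; the split and field names follow the highlights above as assumptions and may differ from the actual dataset card.

```python
from datasets import load_dataset

# Minimal sketch: pull the posts and inspect one record.
# The split ("train") and column names ("title", "date") are assumptions based
# on the highlights above; check the dataset card for the real schema.
ds = load_dataset("nyuuzyou/fishkinet-posts", split="train")
print(ds)                               # row count and column names
print(ds[0]["title"], ds[0]["date"])    # peek at the first post
```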
Alan Turing's mind-bender "Can machines think?" in its original, glorious form. This 74-year-old paper laid the foundation for how we think about AI and machine intelligence today. The level of detail, clarity and foresight is just phenomenal - he was way ahead of his time 🧠🤖

Original copy: https://archive.org/details/MIND--COMPUTING-MACHINERY-AND-INTELLIGENCE
Introducing HelpingAI2-9B, an emotionally intelligent LLM.
Model Link :
OEvortex/HelpingAI2-9B

Demo Link:
Abhaykoul/HelpingAI2


This model is part of the innovative HelpingAI series and it stands out for its ability to engage users with emotional understanding.

Key Features:
-----------------

* It scores 95.89 on EQ-Bench, higher than all top-notch LLMs, reflecting advanced emotional recognition.
* It gives responses in an empathetic and supportive manner.

Be sure to try our demo:
Abhaykoul/HelpingAI2
NEW math-instruct model + dataset!

ValiantLabs/Llama3.1-8B-Cobalt
is our new math-instruct model.
Trained using a synthetic math-instruct dataset generated with Llama 3.1 405b. Find the dataset here:
sequelbox/Polytope


More to come soon :)
Supercool Weekend Read🤖
Nvidia researchers achieved SOTA LLM compression metrics using pruning and knowledge distillation techniques.

Details on Techniques (Simplified):
They started off with a large pre-trained language model (15B params), then:

1. Estimated the importance of different parts of the model (neurons, attention heads, layers) using activation-based metrics on a small calibration dataset.
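
For intuition, here is a toy sketch of that first step. It is not the paper's exact procedure: "gpt2" stands in for the 15B base model, the layer paths are GPT-2 specific, and each MLP neuron is scored simply by its mean absolute activation over a tiny calibration set.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Toy sketch of activation-based neuron importance (NOT the paper's exact recipe).
model_name = "gpt2"  # stand-in for the 15B base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

importance = {}  # layer index -> running per-neuron importance

def make_hook(idx):
    def hook(module, inputs, output):
        # output: (batch, seq, intermediate); average |activation| over batch and sequence
        importance[idx] = importance.get(idx, 0) + output.detach().abs().mean(dim=(0, 1))
    return hook

# Hook the MLP up-projection of every transformer block
handles = [block.mlp.c_fc.register_forward_hook(make_hook(i))
           for i, block in enumerate(model.transformer.h)]

calibration_texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Large language models are trained on web-scale text.",
]
with torch.no_grad():
    for text in calibration_texts:
        model(**tok(text, return_tensors="pt"))

for h in handles:
    h.remove()

# The lowest-scoring neurons in each MLP are candidates for pruning.
for idx, scores in importance.items():
    weakest = scores.topk(8, largest=False).indices.tolist()
    print(f"layer {idx}: weakest neurons {weakest}")
```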
What Happens When RAG Systems Become Fully Vision-Language Model-Based?
HF Demo:
bokesyo/MiniCPMV-RAG-PDFQA

Multimodal Dense Retriever:
RhapsodyAI/minicpm-visual-embedding-v0

Generation Model:
openbmb/MiniCPM-V-2_6

Github: https://github.com/RhapsodyAILab/MiniCPM-V-Embedding-v0-Train

The vision-language dense retriever MiniCPM-Visual-Embedding-v0 reads PDFs directly -- no external OCR step required. With strong OCR and visual understanding capabilities, it generates multimodal dense representations, allowing you to build and search through your personal library with ease.

Ask a question, it retrieves the most relevant pages. Then, MiniCPM-V-2.6 provides answers based on the retrieved pages, with strong multi-image understanding capabilities.

Whether you’re working with a visually-intensive or text-oriented PDF, it helps you quickly find the information you need. You can also build a personal library with it.

It operates just like a human: reading, storing, retrieving, and answering with full visual comprehension.

Currently, the online demo supports PDFs with up to 50 pages due to GPU time limits. For longer PDFs or entire books, you can deploy it on your own machine.
https://cdn-uploads.huggingface.co/production/uploads/6415818a986557e8cac252bf/sjtQD7CFgox46h9EVHCG_.png
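
To make the retrieve-then-answer flow concrete, here is a minimal sketch of the retrieval half. The helpers embed_page, embed_query and answer_with_vlm are hypothetical stand-ins for the minicpm-visual-embedding-v0 encoders and MiniCPM-V-2.6; their real APIs may differ.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Normalize rows, then take dot products.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def retrieve(query_vec: np.ndarray, page_vecs: np.ndarray, top_k: int = 3) -> list[int]:
    """Return indices of the top_k most similar PDF pages."""
    scores = cosine_sim(query_vec[None, :], page_vecs)[0]
    return np.argsort(-scores)[:top_k].tolist()

# Usage (hypothetical helpers, shown for shape only):
# page_vecs = np.stack([embed_page(img) for img in pdf_page_images])  # offline indexing
# top_pages = retrieve(embed_query("What does Figure 3 show?"), page_vecs)
# answer = answer_with_vlm([pdf_page_images[i] for i in top_pages], question)
```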
So turns out I've been spreading a bit of misinformation when it comes to imatrix in llama.cpp

It starts out true: imatrix runs the model against a corpus of text and tracks the activations of the weights to determine which are most important.

However what the quantization then does with that information is where I was wrong.

I think I made an accidental connection between imatrix and ExLlamaV2's measurement pass, where ExLlamaV2 decides how many bits to assign to which weights depending on the target BPW.

Instead, what llama.cpp does with imatrix is attempt to select a scale for each quantization block that most accurately returns the important weights to their original values, i.e. minimizing the dequantization error weighted by the importance of the activations.

The mildly surprising part is that it actually does a relatively brute-force search: it picks a bunch of candidate scales, tries each one, and sees which results in the minimum error for the weights deemed important in the group.

But yeah, it turns out the quantization scheme is always the same; it's just that the scaling has a bit more logic to it when you use imatrix.

Huge shoutout to @compilade for helping me wrap my head around it - feel free to add/correct as well if I've messed something up
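
Here is a toy sketch of that scale search, assuming a symmetric integer format and a made-up importance vector. It illustrates the idea only and is not llama.cpp's actual quantization code.

```python
import numpy as np

def quantize_block(weights: np.ndarray, importance: np.ndarray,
                   bits: int = 4, n_candidates: int = 32):
    qmax = 2 ** (bits - 1) - 1                 # symmetric int range, e.g. [-8, 7] for 4-bit
    base_scale = np.abs(weights).max() / qmax  # naive round-to-nearest scale
    best_scale, best_err, best_q = base_scale, np.inf, None
    # Brute-force search over scales near the naive one
    for factor in np.linspace(0.7, 1.3, n_candidates):
        scale = base_scale * factor
        q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
        # Importance-weighted dequantization error
        err = np.sum(importance * (weights - q * scale) ** 2)
        if err < best_err:
            best_scale, best_err, best_q = scale, err, q
    return best_scale, best_q

# Example: "important" weights (high activation) get reconstructed more faithfully.
rng = np.random.default_rng(0)
w = rng.normal(size=32).astype(np.float32)
imp = np.ones(32)
imp[:4] = 50.0  # pretend the first 4 weights matter most
scale, q = quantize_block(w, imp)
print("chosen scale:", scale)
print("max error on important weights:", np.abs(w[:4] - q[:4] * scale).max())
```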
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck, and share your best score in the comments!

victor/fake-insects
I'm excited to share a really cool milestone in my AI/LLM journey.

Brief backstory: Before diving into AI, I spent over a decade working in ecological fields such as the conservation corps, biodynamic farming, and natural habitat restoration. This background instilled in me a deep concern about the environmental impact of scaling AI without sustainable practices.

Driven by this concern, I've spent months planning and experimenting to make my AI work more eco-friendly. I'm thrilled to announce that I've successfully transitioned my entire operation to run on 100% sustainable solar power!
🚀 We’re excited to launch Ghost 8B Beta (1608), a top-performing language model with unmatched multilingual support and cost efficiency.

Key Highlights:
- Superior Performance: Outperforms Llama 3.1 8B Instruct, GPT-3.5 Turbo, Claude 3 Opus, GPT-4, and more in winrate scores.
- Expanded Language Support: Now supports 16 languages, including English, Vietnamese, Spanish, Chinese, and more.
- Enhanced Capabilities: Improved math, reasoning, and instruction-following for better task handling.
Put together a small repo showing how to go from making your own fine-tuning dataset w/ services like Groq & Together to publishing that model on ollama.

In my case I fine-tuned SmolLM-360M to be a better assistant for my Pi-Card (previous post) project.

Check it out!
https://github.com/nkasmanoff/ft-flow
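
As a rough illustration of the first step in that flow (not the repo's actual code), here is one way to generate synthetic instruction/response pairs through an OpenAI-compatible endpoint such as Groq's; the base URL and model id are assumptions, so check the provider's docs.

```python
import json
import os
from openai import OpenAI

# Sketch only: Groq and Together both expose OpenAI-compatible endpoints.
# The base URL and model id below are illustrative assumptions.
client = OpenAI(base_url="https://api.groq.com/openai/v1",
                api_key=os.environ["GROQ_API_KEY"])

prompts = [
    "What is the weather like on Mars?",
    "Give me a one-sentence summary of photosynthesis.",
]

with open("finetune_data.jsonl", "w") as f:
    for prompt in prompts:
        reply = client.chat.completions.create(
            model="llama-3.1-70b-versatile",  # illustrative model id
            messages=[{"role": "user", "content": prompt}],
        )
        record = {"instruction": prompt,
                  "response": reply.choices[0].message.content}
        f.write(json.dumps(record) + "\n")  # one training example per line
```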
ResShift 1-Click Windows, RunPod, Massed Compute, and Kaggle installers with an amazing Gradio app and batch image processing. ResShift is "Efficient Diffusion Model for Image Super-resolution by Residual Shifting" (NeurIPS 2023, Spotlight).


Official Repo : https://github.com/zsyOAOA/ResShift

I have developed a very advanced Gradio app.