HF-hub - Share and discover more about AI with social posts from the community.
Question about LightEval 🤗:

I've been searching for an LLM evaluation suite that can, out of the box, compare the outputs of a model without any enhancements vs. the same model with better prompt engineering, vs. the same model with RAG, vs. the same model with fine-tuning.

I unfortunately have not found a tool that fits my exact description, but of course I ran into LightEval.

A huge pain point of building large-scale projects that use LLMs is that, prior to building an MVP, it is difficult to evaluate whether better prompt engineering, RAG, fine-tuning, or some combination of them all is needed for satisfactory LLM output in the project's given use case.

Time and resources are then wasted R&D'ing exactly which LLM enhancements are needed.

I believe an out-of-the-box solution to compare models w/ or w/out the aforementioned LLM enhancements could help teams of any size better decide what LLM enhancements are needed prior to building.

I wanted to know if the LightEval team or Hugging Face in general is thinking about such a tool.
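
To make the idea concrete, here's a rough sketch of the kind of comparison I mean. This is not LightEval's API; the model id, eval set, and prompt templates are all placeholders, and only the baseline-vs-prompt-engineering axis is shown:

```python
# Hypothetical harness: score the same model under different prompt templates.
# Model id, eval_set, and templates are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

eval_set = [  # tiny stand-in for a real benchmark
    {"question": "What is the capital of France?", "answer": "Paris"},
]

variants = {
    "baseline": "{question}",
    "prompt_engineered": "Answer concisely and factually.\nQ: {question}\nA:",
}

for name, template in variants.items():
    correct = 0
    for ex in eval_set:
        out = generator(template.format(question=ex["question"]),
                        max_new_tokens=32, do_sample=False)[0]["generated_text"]
        correct += ex["answer"].lower() in out.lower()
    print(f"{name}: {correct}/{len(eval_set)}")
```

A RAG or fine-tuned variant would slot in as another entry that prepends retrieved context or swaps the generator.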
Here's my favorite piece of the summer bias detection research project (paper coming in Sept). We trained BERT for token classification (multi-label), to identify:
- Generalizations
- Unfairness
- Stereotypes

HF Space:
maximuspowers/bias-detection-ner

Article on Training: https://huggingface.co/blog/maximuspowers/bias-entity-recognition

Pls reach out with ideas!! Lots more info coming soon; our research group has workshops and a hackathon planned for launching this open-source project. Thanks!
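
If you want to poke at the model programmatically, here's a hedged sketch of multi-label token classification with sigmoid thresholding. The repo id is taken from the Space name above and may differ for the actual checkpoint; the 0.5 threshold is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "maximuspowers/bias-detection-ner"  # assumed checkpoint id (from the Space)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Everyone knows people from that city are rude."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Multi-label: an independent sigmoid per label, instead of one softmax over labels.
probs = torch.sigmoid(logits)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, p in zip(tokens, probs):
    labels = [model.config.id2label[i] for i, v in enumerate(p.tolist()) if v > 0.5]
    if labels:
        print(token, labels)
```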
Introducing Voicee, a superfast voice assistant.
KingNish/Voicee

It achieves latency of <500 ms at best, while its average latency is ~700 ms.
It works best in Google Chrome.
Please try it and share your feedback.
Thank you. 🤗
Ghost 8B Beta 1608: Empowering Your AI Assistant
📦 Unlock the Power of Ghost 8B Beta 1608: Build Your Personal AI Companion
Ghost 8B Beta 1608 empowers you to create a safe and multilingual AI assistant tailored to your needs, directly on your personal computer. 🧑‍💻 Leverage AI's capabilities within your own space! 🚀 Ghost 8B Beta 1608 is ready to become your AI companion.
lamhieu/ghost-8b-beta-8k

ghost-x/ghost-8b-beta-668ead6179f93be717db4542
Looking for a Generative AI trainer/speaker for an AI accelerator program (virtual/online sessions).

To get more context about the program, please visit the program landing page: https://llamadesigndrive.com
a new shape-optimized SigLIP just dropped 👀
google/siglip-so400m-patch14-224
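
For anyone wanting to try it, a quick zero-shot sketch with the transformers pipeline (the image path and labels are placeholders):

```python
from transformers import pipeline

# SigLIP scores each label independently with a sigmoid, unlike CLIP's softmax.
classifier = pipeline("zero-shot-image-classification",
                      model="google/siglip-so400m-patch14-224")
print(classifier("cat.jpg", candidate_labels=["a cat", "a dog", "a bird"]))
```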
🌟 Liger Kernel: Efficient Triton Kernels for LLM Training

LIGER "is a [Hugging Face-compatible] collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduces memory usage by 60%."

GitHub: https://github.com/linkedin/Liger-Kernel
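
Usage is a one-line patch per model family, going by the project README; a minimal sketch, assuming a Llama-type model:

```python
from liger_kernel.transformers import apply_liger_kernel_to_llama
from transformers import AutoModelForCausalLM

# Monkey-patches Llama modules (RMSNorm, RoPE, SwiGLU, ...) with Triton kernels.
apply_liger_kernel_to_llama()

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
# ... then train as usual; the patched kernels cut memory and raise throughput.
```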
Neural Network  (1 Byte explainer for everybody)

Just like our brain, a Neural Network is made up of interconnected "neurons". These neurons work together by learning from (input) data and getting better at tasks (in the hidden layer) to give (output) predictions or decisions.
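
Here is that explainer as a few lines of NumPy: random "synapse" weights, a hidden layer of neurons, and an output prediction. The network is untrained, so the numbers are meaningless until learning updates the weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # input -> hidden "synapses"
W2 = rng.normal(size=(3, 1))  # hidden -> output "synapses"

x = np.array([[0.5, -1.0]])   # (input) data
hidden = np.tanh(x @ W1)      # neurons firing in the hidden layer
output = hidden @ W2          # (output) prediction
print(output)
```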
AI21 iterates with new Jamba 1.5 release: New standard for long-context use-cases! 🏅

@ai21labs used a different architecture to beat the status-quo Transformer models: the Jamba architecture combines classic Transformer layers with the new Mamba layers, whose complexity is a linear (instead of quadratic) function of the context length.

What does this imply?

➡️ Jamba models are much more efficient for long contexts: faster (up to 2.5x faster for long contexts), lower memory use, and better at recalling everything in the prompt.

That means it’s a new go-to model for RAG or agentic applications!

And the performance is not too shabby: Jamba 1.5 models are comparable in performance to similar-sized Llama-3.1 models! The largest model even outperforms Llama-3.1 405B on Arena-Hard.

✌️ Comes in 2 sizes: Mini (12B active/52B) and Large (94B active/399B)
📏 Both deliver 256k context length with low memory use: Jamba 1.5 Mini fits 140k tokens of context on a single A100.
⚙️ New quantization method: Experts Int8 quantizes only the weights of the MoE layers, which account for 85% of the model's weights
🤖 Natively supports JSON format generation & function calling.
🔓 Permissive license *if your org makes <$50M revenue*

Available on the Hub 👉
ai21labs/jamba-15-66c44befa474a917fcf55251

Read their release blog post 👉 https://www.ai21.com/blog/announcing-jamba-model-family
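
A hedged loading sketch; the exact checkpoint id inside the collection is an assumption here, so check the collection page first:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/AI21-Jamba-1.5-Mini"  # assumed id; verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize this long report: ...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0]))
```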
The latest timm validation & test set results are now viewable in a leaderboard space:
timm/leaderboard


As of yesterday, I updated all of the results for the ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-R, ImageNet-A, and Sketch sets. The csv files can be found in the GH repo https://github.com/huggingface/pytorch-image-models/tree/main/results

Unfortunately the latest benchmark csv files are not yet up to date; there are some gaps between dataset results and throughput/FLOP numbers that impact the plots.

h/t to @MohamedRashad for making the first timm leaderboard.
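
The CSVs are easy to analyze directly; a small sketch, assuming the results-imagenet.csv file name from the repo layout:

```python
import pandas as pd

url = ("https://raw.githubusercontent.com/huggingface/pytorch-image-models/"
       "main/results/results-imagenet.csv")
df = pd.read_csv(url)
# Top-10 models by ImageNet top-1 accuracy.
print(df.sort_values("top1", ascending=False).head(10)[["model", "top1", "top5"]])
```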
Huge updates and improvements for FLUX LoRA training: https://www.patreon.com/posts/kohya-flux-lora-110293257

10 GB, 16 GB, 24 GB and 48 GB GPU configs added - the 10 GB config is sadly about 3x to 5x slower

Massed Compute, RunPod and Windows Kohya SS GUI LoRA installers added to the zip file

Also right now testing a new 16 GB FLUX LoRA training config and a new way of doing regularization images. Moreover, testing Apply T5 Attention Mask too. Let's see if the Kohya FLUX LoRA workflow will become even better or not

Also massive grid comparisons shared here: https://www.reddit.com/r/StableDiffusion/comments/1eyj4b8/kohya_ss_gui_very_easy_f
Shoutout to everyone who participated in BigScience! Doesn't get enough credit but IMO paved the way for open-source LLMs!

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model (2211.05100)

bigscience/bloom

bigscience/bloomz
ACL 2024: The Missing Papers

Apparently, some papers from ACL 2024 are still not listed in the ACL Anthology. While this issue will hopefully be fixed soon, we should give those papers some additional spotlight.

Some of my favorites:

1. Dolma is an English corpus that encompasses 3 trillion tokens. Additionally, it is accompanied by an exceptional software package that considerably advances the state of the art in preparing data for LLM pretraining. (Source: I am currently using Dolma.)
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research (2402.00159)


2. In the paper "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models", the authors show how extending the context length impacts an LLM's reasoning performance. I asked myself a similar question a few months ago, and therefore this paper is highly interesting to me.
Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models (2402.14848)


This was brought to my attention through a LinkedIn post by @ShayeghB, who is also affected:
Ensemble-Based Unsupervised Discontinuous Constituency Parsing by Tree Averaging (2403.00143)


View all the missing papers here:
https://theshayegh.github.io/ACL2024MissingPapers/
I can't believe this... Phi-3.5-mini (3.8B) running in-browser at ~90 tokens/second on WebGPU w/ Transformers.js and ONNX Runtime Web! 🤯 Since everything runs 100% locally, no messages are sent to a server — a huge win for privacy!
- 🤗 Demo:
webml-community/phi-3.5-webgpu

- 🧑‍💻 Source code: https://github.com/huggingface/transformers.js-examples/tree/main/phi-3.5-webgpu
🌐 Check out the new dataset sourced from Fishki.net, one of the popular entertainment and news portals on the Russian internet, known for its diverse content including humor, interesting facts, and viral stories:
nyuuzyou/fishkinet-posts

📊 Dataset highlights:
- 369,180 posts
- Includes original posts with titles, content, images, and metadata
- Each entry contains URL, title, author, date, tags, content, and image URLs
- Primarily in Russian language
- Covers a wide range of topics in entertainment, news, and social media content
- Spans nearly two decades of posts, likely from the early 2000s to 2024
- Dedicated to public domain under Creative Commons Zero (CC0) license
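
Loading it follows the standard datasets flow; streaming (shown here) avoids downloading all 369k posts up front:

```python
from datasets import load_dataset

ds = load_dataset("nyuuzyou/fishkinet-posts", split="train", streaming=True)
print(next(iter(ds)))  # one post: URL, title, author, date, tags, content, image URLs
```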
Alan Turing's mind-bender, "Can machines think?", in its glorified form. This 74-year-old paper laid the foundation for how we think about AI and machine intelligence today. The level of detail, clarity and foresight is just phenomenal - he was way ahead of his time 🧠🤖

Original copy: https://archive.org/details/MIND--COMPUTING-MACHINERY-AND-INTELLIGENCE
Introducing HelpingAI2-9B, an emotionally intelligent LLM.
Model Link :
OEvortex/HelpingAI2-9B

Demo Link:
Abhaykoul/HelpingAI2


This model is part of the innovative HelpingAI series and it stands out for its ability to engage users with emotional understanding.

Key Features:
-----------------

* It scores 95.89 on EQ-Bench, higher than all top-notch LLMs, reflecting advanced emotional recognition.
* It gives responses in an empathetic and supportive manner.

Be sure to try our demo:
Abhaykoul/HelpingAI2
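
A hedged usage sketch with the transformers chat pipeline (generation settings are placeholders; check the model card for any recommended system prompt or trust_remote_code requirement):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="OEvortex/HelpingAI2-9B")
messages = [{"role": "user", "content": "I'm feeling overwhelmed at work lately."}]
print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```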