HF-hub - Share and discover more about AI with social posts from the community.
Alright, y'all

I know it's a Saturday, but I decided to release my first Flux Dev LoRA.

It's a retrain of my "Frosting Lane" model, and I'm sure the styles will just keep improving.

Have fun! Link Below - Thanks again to @ostris for the trainer and Black Forest Labs for the awesome model!

alvdansen/frosting_lane_flux
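
If you want to try it, here is a minimal sketch with diffusers (not the author's posted code), assuming access to the black-forest-labs/FLUX.1-dev base model and a GPU with enough VRAM:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1 [dev] base model and apply the Frosting Lane LoRA on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("alvdansen/frosting_lane_flux")  # the LoRA released in this post
pipe.to("cuda")

# Example prompt; steps and guidance values are illustrative, not the author's settings.
image = pipe(
    "a whimsical pastel illustration of a fox in a meadow",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("frosting_lane_sample.png")
```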
NEW MODEL + DATASET! :)

Check out Enigma, our new code-instruct model:
- trained on synthetic code-instruct data created with Llama 3.1 405B
- high-quality code-instruct responses in the Llama 3.1 Instruct format

The model:
ValiantLabs/Llama3.1-8B-Enigma

The dataset:
sequelbox/Tachibana


Enjoy! We've got more new datasets and models to follow soon.
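
As a rough sketch (not from the original post), Enigma can be queried with transformers through the tokenizer's Llama 3.1 Instruct chat template; the prompt and generation settings below are just illustrations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-Enigma"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The chat template formats the request in the Llama 3.1 Instruct style.
messages = [{"role": "user", "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```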
I've been testing a new open-source AI image generation model, FLUX.1 [dev], today. I'm quite impressed! This could mark a goodbye to Midjourney and Stable Diffusion. The images were generated locally on an RTX 3090, taking around 40 seconds each.
Thank you Robin & Black Forest Labs!
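
The post doesn't include code, but a minimal local-generation sketch with diffusers might look like the following; enable_model_cpu_offload() is an assumption here to keep VRAM usage within a 24 GB card's budget:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades some speed for VRAM headroom on consumer GPUs

# Illustrative prompt and settings, not the poster's exact configuration.
image = pipe(
    "a photorealistic mountain lake at sunrise",
    height=1024,
    width=1024,
    num_inference_steps=50,
    guidance_scale=3.5,
).images[0]
image.save("flux_dev_test.png")
```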
Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models
Generative AI has been taking the world by storm. As the data and AI company, we have been on this journey with the release of the open-source large language model Dolly, as well as the internally crowdsourced dataset licensed for research and commercial use that we used to fine-tune it, databricks-dolly-15k. Both the model and dataset are available on Hugging Face. We’ve learned a lot throughout this process, and today we’re excited to announce our first of many official commits to the Hugging Face codebase, which allows users to easily create a Hugging Face Dataset from an Apache Spark dataframe.
https://huggingface.co/blog/databricks-case-study Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models
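
As a minimal sketch of that integration (the dataframe below is illustrative, not from the Databricks post), Dataset.from_spark converts a Spark dataframe into a Hugging Face Dataset, with the conversion work distributed across Spark workers:

```python
from datasets import Dataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hf-datasets-demo").getOrCreate()

# Toy dataframe standing in for a real training corpus.
df = spark.createDataFrame(
    [("Dolly is an open LLM.", 0), ("Spark makes data prep scalable.", 1)],
    schema=["text", "label"],
)

ds = Dataset.from_spark(df)  # materialize the Spark dataframe as a Hugging Face Dataset
print(ds)
```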
Rocket Money x Hugging Face: Scaling Volatile ML Models in Production
"We discovered that they were not just service providers, but partners who were invested in our goals and outcomes” - Nicolas Kuzak, Senior ML Engineer at Rocket Money.
Scaling and Maintaining ML Models in Production Without an MLOps Team
We created Rocket Money (a personal finance app formerly known as Truebill) to help users improve their financial wellbeing. Users link their bank accounts to the app which then classifies and categorizes their transactions, identifying recurring patterns to provide a consolidated, comprehensive view of their personal financial life. A critical stage of transaction processing is detecting known merchants and services, some of which Rocket Money can cancel and negotiate the cost of for members. This detection starts with the transformation of short, often truncated and cryptically formatted transaction strings into classes we can use to enrich our product experience.
https://huggingface.co/blog/rocketmoney-case-study Rocket Money x Hugging Face: Scaling Volatile ML Models in Production
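
A hypothetical sketch of that merchant-detection step, using the transformers pipeline API with a placeholder model name rather than Rocket Money's actual production model:

```python
from transformers import pipeline

# Placeholder checkpoint: stands in for a classifier fine-tuned on transaction strings.
classifier = pipeline(
    "text-classification",
    model="your-org/transaction-merchant-classifier",
)

# Short, truncated, cryptically formatted transaction strings like those described above.
transactions = ["AMZN MKTP US*2K4T", "NETFLIX.COM 866-579-7172", "SQ *COFFEE SHOP"]
for txn, pred in zip(transactions, classifier(transactions)):
    print(f"{txn!r:35} -> {pred['label']} ({pred['score']:.2f})")
```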
Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS
Executive Summary
Fetch, a consumer rewards company, developed about 15 different AI tools to help it receive, route, read, process, analyze, and store receipts uploaded by users. The company has more than 18 million active monthly users for its shopping rewards app. Fetch wanted to rebuild its AI-powered platform and, using Amazon Web Services (AWS) and with the support of AWS Partner Hugging Face, moved from using third-party applications to developing its own tools to gain better insights about customers. Consumers scan receipts, or forward electronic receipts, to receive rewards points for their purchases. Businesses can offer special rewards to users, such as extra points for purchasing a particular product. The company can now process its more than 11 million receipts per day faster and gets better data.
https://huggingface.co/blog/fetch-eap-case-study Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS
Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face
Who is Ryght?
Ryght is building an enterprise-grade generative AI platform tailored for the healthcare and life sciences sectors. Today marks the official launch of Ryght Preview, now publicly available to all.

Life science companies are amassing a wealth of data from diverse sources (lab data, EMR, genomics, claims, pharmacy, clinical, etc.), but analysis of that data is archaic, requiring large teams for everything from simple queries to developing useful ML models. There is huge demand for actionable knowledge to drive drug development, clinical trials, and commercial activity, and the rise of precision medicine is only accelerating this demand.

https://huggingface.co/blog/ryght-case-study Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face
Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap
Everybody knows that a great visual is worth a thousand words. The team at Prezi, a visual communications software company, is putting this insight into practice with Prezi presentations, which combine images and text in a highly dynamic format.
Prezi has joined the Hugging Face Expert Support Program to fully leverage modern machine learning's potential. Over the past months, Hugging Face has supported Prezi in integrating smaller, more efficient open-source models into their ML workflows. This cooperation started at a perfect time, as multimodal models are becoming increasingly capable.
https://huggingface.co/blog/prezi-case-study Going multimodal: How Prezi is leveraging the Hub and the Expert Support Program to accelerate their ML roadmap
XLSCOUT Unveils ParaEmbed 2.0: a Powerful Embedding Model Tailored for Patents and IP with Expert Support from Hugging Face
XLSCOUT, a Toronto-based leader in the use of AI in intellectual property (IP), has developed a powerful proprietary embedding model called ParaEmbed 2.0 stemming from an ambitious collaboration with Hugging Face’s Expert Support Program. The collaboration focuses on applying state-of-the-art AI technologies and open-source models to enhance the understanding and analysis of complex patent documents including patent-specific terminology, context, and relationships. This allows XLSCOUT’s products to offer the best performance for drafting patent applications, patent invalidation searches, and ensuring ideas are novel compared to previously available patents and literature.
https://huggingface.co/blog/xlscout-case-study XLSCOUT Unveils ParaEmbed 2.0: a Powerful Embedding Model Tailored for Patents and IP with Expert Support from Hugging Face
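
ParaEmbed 2.0 itself is proprietary, so the sketch below uses a generic open embedding model as a stand-in to illustrate the kind of patent-similarity scoring described; the texts and the model choice are assumptions, not XLSCOUT's pipeline:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder open embedding model standing in for the proprietary ParaEmbed 2.0.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

idea = "A battery management system that balances cell charge using predictive thermal modeling."
prior_art = [
    "Method for equalizing charge across lithium-ion cells using temperature sensors.",
    "Apparatus for wireless charging of consumer electronics.",
]

# Cosine similarity between the idea and each prior-art passage.
scores = util.cos_sim(model.encode(idea), model.encode(prior_art))
for doc, score in zip(prior_art, scores[0]):
    print(f"{score:.3f}  {doc}")
```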
SmolLM - blazingly fast and remarkably powerful
TL;DR
This blog post introduces SmolLM, a family of state-of-the-art small models with 135M, 360M, and 1.7B parameters, trained on a new high-quality dataset. It covers data curation, model evaluation, and usage.

Introduction
There is increasing interest in small language models that can operate on local devices. This trend involves techniques such as distillation or quantization to compress large models, as well as training small models from scratch on large datasets. These approaches enable novel applications while dramatically reducing inference costs and improving user privacy.

Microsoft's Phi series, Alibaba's Qwen2 (less than 2B), and Meta's MobileLLM demonstrate that small models can achieve impressive results when designed and trained thoughtfully. However, most of the details about the data curation and training of these models are not publicly available.
https://huggingface.co/blog/smollm SmolLM - blazingly fast and remarkably powerful
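
As a minimal sketch, the released checkpoints can be run locally with transformers; the checkpoint id below assumes the 360M variant published under the HuggingFaceTB organization as part of the SmolLM release:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-360M"  # assumed id for the 360M SmolLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Small models like this can run on a laptop CPU or a modest GPU.
inputs = tokenizer("Small language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```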