HF-hub - Share and discover more about AI with social posts from the community.
Block Sparse Matrices for Smaller and Faster Language Models
Saving space and time, one zero at a time
In previous blog posts we introduced sparse matrices and what they could do to improve neural networks.

The basic assumption is that full dense layers are often overkill and can be pruned without a significant loss in precision. In some cases, sparse linear layers can even improve precision and/or generalization.

The main issue is that the currently available code for sparse algebra computation is severely lacking in efficiency. We are also still waiting for official PyTorch support.

That's why we ran out of patience and took some time this summer to address this "lacuna". Today, we are excited to release the extension pytorch_block_sparse.

By itself, or better yet combined with other methods like distillation and quantization, this library enables networks that are both smaller and faster, something Hugging Face considers crucial to let anybody use neural networks in production at low cost and to improve the experience for the end user.
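To make this concrete, here is a minimal sketch of the drop-in usage pattern, assuming the library's BlockSparseLinear layer and its density argument; the layer sizes are illustrative.

```python
import torch
from pytorch_block_sparse import BlockSparseLinear

class SparseMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Same call signature as torch.nn.Linear, but only ~10% of the weight blocks are kept.
        self.fc1 = BlockSparseLinear(1024, 4096, density=0.1)
        self.fc2 = BlockSparseLinear(4096, 1024, density=0.1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = SparseMLP().cuda()                         # the block-sparse CUDA kernels need a GPU
out = model(torch.randn(8, 1024, device="cuda"))
print(out.shape)                                   # torch.Size([8, 1024])
```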
Memory-efficient Diffusion Transformers with Quanto and Diffusers
Over the past few months, we have seen an emergence in the use of Transformer-based diffusion backbones for high-resolution text-to-image (T2I) generation. These models use the transformer architecture as the building block for the diffusion process, instead of the UNet architecture that was prevalent in many of the initial diffusion models. Thanks to the nature of Transformers, these backbones show good scalability, with models ranging from 0.6B to 8B parameters.

As models become larger, memory requirements increase. The problem intensifies because a diffusion pipeline usually consists of several components: a text encoder, a diffusion backbone, and an image decoder. Furthermore, modern diffusion pipelines use multiple text encoders – for example, there are three in the case of Stable Diffusion 3. It takes 18.765 GB of GPU memory to run SD3 inference using FP16 precision.

These high memory requirements can make it difficult to use these models with consumer GPUs, slowing adoption and making experimentation harder. In this post, we show how to improve the memory efficiency of Transformer-based diffusion pipelines by leveraging Quanto's quantization utilities from the Diffusers library.
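To sketch the idea (the checkpoint id and the choice of qfloat8 weights are illustrative assumptions, not the post's exact recipe), quantizing the memory-heavy components of an SD3-style pipeline with Quanto looks roughly like this:

```python
import torch
from diffusers import StableDiffusion3Pipeline
from optimum.quanto import freeze, qfloat8, quantize

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)

# Quantize the two largest components: the diffusion transformer and the T5 text encoder.
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
quantize(pipe.text_encoder_3, weights=qfloat8)
freeze(pipe.text_encoder_3)

pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=28).images[0]
image.save("astronaut.png")
```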
Quanto: a PyTorch quantization backend for Optimum
Quantization is a technique to reduce the computational and memory costs of evaluating Deep Learning Models by representing their weights and activations with low-precision data types like 8-bit integer (int8) instead of the usual 32-bit floating point (float32).

Reducing the number of bits means the resulting model requires less memory storage, which is crucial for deploying Large Language Models on consumer devices. It also enables specific optimizations for lower bitwidth datatypes, such as int8 or float8 matrix multiplications on CUDA devices.

Many open-source libraries are available to quantize PyTorch deep learning models, each providing very powerful features, yet they are often restricted to specific model configurations and devices.

Also, although they are based on the same design principles, they are unfortunately often incompatible with one another.

Today, we are excited to introduce quanto, a PyTorch quantization backend for Optimum.
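A minimal sketch of the core workflow, assuming quanto's quantize/freeze API on a plain PyTorch module (the toy model below is purely illustrative):

```python
import torch
from optimum.quanto import freeze, qint8, quantize

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
)

# Swap eligible modules for quantized equivalents, keeping weights in int8.
quantize(model, weights=qint8)

# Freeze converts the (still float) shadow weights into true int8 tensors.
freeze(model)

with torch.no_grad():
    out = model(torch.randn(4, 1024))
print(out.shape)  # torch.Size([4, 10])
```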
Fine-tuning Llama 2 70B using PyTorch FSDP
Introduction
In this blog post, we will look at how to fine-tune Llama 2 70B using PyTorch FSDP and related best practices. We will be leveraging Hugging Face Transformers, Accelerate and TRL. We will also learn how to use Accelerate with SLURM.

Fully Sharded Data Parallel (FSDP) is a paradigm in which the optimizer states, gradients and parameters are sharded across devices. During the forward pass, each FSDP unit performs an all-gather operation to obtain the complete weights, runs the computation, and then discards the shards gathered from other devices. After the forward pass, the loss is computed, followed by the backward pass. During the backward pass, each FSDP unit again all-gathers the complete weights and computes the local gradients. These local gradients are then averaged and sharded across the devices via a reduce-scatter operation, so that each device can update the parameters of its own shard.
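To make the sharding concrete, here is a bare-bones sketch using PyTorch FSDP directly; the checkpoint id and wrap policy are illustrative, and the blog's actual recipe drives all of this through Accelerate and SLURM (which also handles low-memory model loading) rather than raw FSDP calls:

```python
import functools
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# Illustrative checkpoint; in practice Accelerate avoids materializing full weights per rank.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", torch_dtype=torch.bfloat16
)

# Wrap each decoder layer as its own FSDP unit, so parameters, gradients and
# optimizer states are sharded at the layer boundary.
wrap_policy = functools.partial(
    transformer_auto_wrap_policy, transformer_layer_cls={LlamaDecoderLayer}
)
model = FSDP(model, auto_wrap_policy=wrap_policy, device_id=torch.cuda.current_device())

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# ...training loop: forward, loss.backward(), optimizer.step() as usual.
```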
Retrieval Augmented Generation with Huggingface Transformers and Ray
A guest blog post by Amog Kamsetty from the Anyscale team
Huggingface Transformers recently added the Retrieval Augmented Generation (RAG) model, a new NLP architecture that leverages external documents (like Wikipedia) to augment its knowledge and achieve state of the art results on knowledge-intensive tasks. In this blog post, we introduce the integration of Ray, a library for building scalable applications, into the RAG contextual document retrieval mechanism. This speeds up retrieval calls by 2x and improves the scalability of RAG distributed fine-tuning.

What is Retrieval Augmented Generation (RAG)?
Hyperparameter Search with Transformers and Ray Tune
A guest blog post by Richard Liaw from the Anyscale team
With cutting-edge research implementations and thousands of trained models easily accessible, the Hugging Face transformers library has become critical to the success and growth of natural language processing today.

For any machine learning model to achieve good performance, users often need to implement some form of hyperparameter tuning. Yet nearly everyone (1, 2) either ends up disregarding hyperparameter tuning or opts for a simplistic grid search over a small search space.

However, even simple experiments can show the benefit of using an advanced tuning technique. Below is a recent experiment run on a BERT model from Hugging Face transformers on the RTE dataset. Genetic optimization techniques like Population Based Training (PBT) can provide large performance improvements compared to standard hyperparameter optimization techniques.
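To illustrate, here is a hedged sketch of running such a search through Trainer.hyperparameter_search with the Ray Tune backend; the model, preprocessing and search space below are illustrative choices, not the exact experiment referenced above:

```python
from datasets import load_dataset
from ray import tune
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("glue", "rte").map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True), batched=True
)

def model_init():
    # The Trainer re-instantiates the model for every trial.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hpo", evaluation_strategy="epoch"),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,  # enables dynamic padding when batching
)

best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=8,
    hp_space=lambda _: {
        "learning_rate": tune.loguniform(1e-5, 5e-5),
        "per_device_train_batch_size": tune.choice([8, 16, 32]),
    },
)
print(best_run.hyperparameters)
```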
Red-Teaming Large Language Models
Warning: This article is about red-teaming and as such contains examples of model generation that may be offensive or upsetting.

Large language models (LLMs) trained on an enormous amount of text data are very good at generating realistic text. However, these models often exhibit undesirable behaviors like revealing personal information (such as social security numbers) and generating misinformation, bias, hatefulness, or toxic content. For example, earlier versions of GPT-3 were known to exhibit sexist behaviors (see below) and biases against Muslims.
AutoGen from @Microsoft is crazy! 🚀 It's an open-source framework that allows LLM agents to chat with each other to solve your tasks. 🤖💬

They use the Assistant-Agent and User-Proxy-Agent framework! 🛠

As the name suggests, the Assistant-Agent does the work, and the User-Proxy-Agent behaves like a human, guiding the Assistant-Agent and double-checking its work! 🧑‍💻

Both the Assistant-Agent and the User-Proxy-Agent can be backed by the same LLM or by different ones. 🤔🔄

AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. 🌟

This is truly amazing for building agentic AI quickly! 🚀

GitHub: https://github.com/microsoft/autogen 🔗
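Below is a hedged sketch of that two-agent loop using the classic pyautogen API; the model name, API key handling and task are placeholders.

```python
from autogen import AssistantAgent, UserProxyAgent

# Placeholder LLM configuration: point this at whatever endpoint/key you use.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]}

# The assistant does the work (writes code, drafts answers).
assistant = AssistantAgent("assistant", llm_config=llm_config)

# The user proxy plays the human: it runs the assistant's code and reports back.
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # fully automated; set to "ALWAYS" to stay in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(assistant, message="Plot NVDA's closing price for the last month.")
```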
How To Use On Windows
Just extract the files into a folder such as c:/BiRefNet_v1

Double-click the Windows_Install.bat file and it will generate an isolated virtual environment and install the requirements

It will automatically download the models into your Hugging Face cache (the best model is under 1 GB)

Then start and use the Gradio APP with Windows_Start_App.bat

Cloud How To Use
Massed Compute and RunPod have instruction txt files. Follow them

Kaggle has all the instructions, one by one

On Kaggle, set the resolution to 1024x1024 or you will get an out-of-memory error
BiRefNet: The Newest State-of-the-Art Background Batch Remover APP

Official repo : https://github.com/ZhengPeng7/BiRefNet

Download APP and installers from : https://www.patreon.com/posts/109913645

Hugging Face Demo: ZhengPeng7/BiRefNet_demo


I have developed a very advanced Gradio APP for this with proper file saving and batch processing. My version also removes the background and saves the result with a transparent background.
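For anyone who prefers a script over the APP, here is a heavily hedged sketch of programmatic background removal with the BiRefNet checkpoint from the Hub; the preprocessing and output handling follow the pattern from the model card and may differ between versions.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

birefnet = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
).eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")
inputs = preprocess(image).unsqueeze(0)

with torch.no_grad():
    # The remote code returns a list of prediction maps; the last one is the final mask.
    mask = birefnet(inputs)[-1].sigmoid().squeeze().cpu().numpy()

# Use the mask as an alpha channel to save a transparent-background PNG.
alpha = Image.fromarray((mask * 255).astype(np.uint8)).resize(image.size)
image.putalpha(alpha)
image.save("output.png")
```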
How to use SD3 Medium with SwarmUI • Run SD3 Medium Locally With SwarmUI
Download SDXL Union ControlNet (rename it any way you want. I just called it union) https://huggingface.co/xinsir/control...
ComfyUI Manager https://github.com/ltdrdata/ComfyUI-M...
The Reformer - Pushing the limits of language modeling

How the Reformer uses less than 8GB of RAM to train on sequences of half a million tokens
The Reformer model, as introduced by Kitaev, Kaiser et al. (2020), is one of the most memory-efficient transformer models for long sequence modeling to date.

Recently, long sequence modeling has experienced a surge of interest, as can be seen from the many submissions from this year alone - Beltagy et al. (2020), Roy et al. (2020), Tay et al., and Wang et al., to name a few. The motivation behind long sequence modeling is that many NLP tasks, e.g. summarization and question answering, require the model to process longer input sequences than models such as BERT are able to handle. In such tasks, long sequence models do not have to truncate the input sequence to avoid memory overflow and have thus been shown to outperform standard "BERT"-like models, cf. Beltagy et al. (2020).
https://github.com/huggingface/blog/blob/main/reformer.md
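As a quick taste of the model described above, here is a hedged sketch of running the character-level enwik8 Reformer from transformers on a long input; the byte-plus-offset encoding follows the pattern used for this checkpoint and should be treated as an assumption.

```python
import torch
from transformers import ReformerModelWithLMHead

model = ReformerModelWithLMHead.from_pretrained("google/reformer-enwik8").eval()

# The enwik8 checkpoint is character-level: encode raw bytes, shifted by 2
# to make room for the special token ids.
text = "The Reformer handles very long sequences with modest memory. " * 300
input_ids = torch.tensor([[b + 2 for b in text.encode("utf-8")]])

with torch.no_grad():
    logits = model(input_ids).logits

print(input_ids.shape, logits.shape)  # long sequence in, per-position vocab logits out
```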
Introducing Storage Regions on the Hub
As part of our Enterprise Hub plan, we recently released support for Storage Regions.

Regions let you decide where your org's models and datasets will be stored. This has two main benefits, which we'll briefly go over in this blog post:

Regulatory and legal compliance, and more generally, better digital sovereignty
Performance (improved download and upload speeds and latency)
Currently we support the following regions:

US 🇺🇸
EU 🇪🇺
coming soon: Asia-Pacific 🌏
But first, let's see how to set up this feature in your organization's settings 🔥
Creating open machine learning datasets? Share them on the Hugging Face Hub!
Who is this blog post for?
Are you a researcher doing data-intensive research or using machine learning as a research tool? As part of this research, you have likely created datasets for training and evaluating machine learning models, and like many researchers, you may be sharing these datasets via Google Drive, OneDrive, or your own personal server. In this post, we’ll outline why you might want to consider sharing these datasets on the Hugging Face Hub instead.

This post outlines:

Why researchers should openly share their data (feel free to skip this section if you are already convinced about this!)
What the Hugging Face Hub offers for researchers who want to share their datasets.
Resources for getting started with sharing your datasets on the Hugging Face Hub.
https://github.com/huggingface/blog/blob/main/researcher-dataset-sharing.md
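As a small illustration of how little code sharing can take (the file name and repo id below are placeholders), a tabular dataset can be pushed straight to the Hub with the datasets library:

```python
from datasets import load_dataset

# Load a local file (CSV here) into a DatasetDict.
ds = load_dataset("csv", data_files="my_experiment_results.csv")

# Requires authentication first, e.g. `huggingface-cli login` or an HF_TOKEN env var.
ds.push_to_hub("my-username/my-research-dataset")
```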
Illustrating Reinforcement Learning from Human Feedback (RLHF)
Language models have shown impressive capabilities in the past few years by generating diverse and compelling text from human input prompts. However, what makes a "good" text is inherently hard to define, as it is subjective and context dependent. There are many applications, such as writing stories where you want creativity, informative pieces that should be truthful, or code snippets that we want to be executable.
https://github.com/huggingface/blog/blob/main/rlhf.md
Rocket Money x Hugging Face: Scaling Volatile ML Models in Production
"We discovered that they were not just service providers, but partners who were invested in our goals and outcomes” - Nicolas Kuzak, Senior ML Engineer at Rocket Money.
Scaling and Maintaining ML Models in Production Without an MLOps Team
We created Rocket Money (a personal finance app formerly known as Truebill) to help users improve their financial wellbeing. Users link their bank accounts to the app which then classifies and categorizes their transactions, identifying recurring patterns to provide a consolidated, comprehensive view of their personal financial life. A critical stage of transaction processing is detecting known merchants and services, some of which Rocket Money can cancel and negotiate the cost of for members. This detection starts with the transformation of short, often truncated and cryptically formatted transaction strings into classes we can use to enrich our product experience.
Deploy MusicGen in no time with Inference Endpoints
MusicGen is a powerful music generation model that takes in a text prompt and an optional melody and outputs music. This blog post will guide you through generating music with MusicGen using Inference Endpoints.

Inference Endpoints allow us to write custom inference functions called custom handlers. These are particularly useful when a model is not supported out-of-the-box by the transformers high-level pipeline abstraction.

transformers pipelines offer powerful abstractions to run inference with transformers-based models. Inference Endpoints leverage the pipeline API to easily deploy models with only a few clicks. However, Inference Endpoints can also be used to deploy models that don't have a pipeline, or even non-transformer models! This is achieved using a custom inference function that we call a custom handler.

Let's demonstrate this process using MusicGen as an example. To implement a custom handler function for MusicGen and deploy it, we will need to:

Duplicate the MusicGen repository we want to serve,
Write a custom handler in handler.py and any dependencies in requirements.txt, and add them to the duplicated repository (a sketch of such a handler follows below),
Create an Inference Endpoint for that repository.
Or simply use the final result and deploy our custom MusicGen model repo, which was built by following the steps above :)
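For orientation, here is a hedged sketch of what such a handler.py could look like, assuming the standard EndpointHandler interface and transformers' MusicGen classes; the exact pre- and post-processing in the deployed repo may differ.

```python
from typing import Any, Dict

import torch
from transformers import AutoProcessor, MusicgenForConditionalGeneration


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the duplicated model repository on the endpoint.
        self.processor = AutoProcessor.from_pretrained(path)
        self.model = MusicgenForConditionalGeneration.from_pretrained(path).to("cuda")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        prompt = data.get("inputs", "lo-fi music with a soothing melody")
        inputs = self.processor(text=[prompt], padding=True, return_tensors="pt").to("cuda")
        with torch.no_grad():
            audio = self.model.generate(**inputs, max_new_tokens=256)
        # Return raw waveform values; the client can write them to a .wav file.
        return {"generated_audio": audio[0].cpu().numpy().tolist()}
```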
Introducing RWKV - An RNN with the advantages of a transformer
ChatGPT and chatbot-powered applications have captured significant attention in the Natural Language Processing (NLP) domain. The community is constantly seeking strong, reliable and open-source models for their applications and use cases. The rise of these powerful models stems from the democratization and widespread adoption of transformer-based models, first introduced by Vaswani et al. in 2017. These models significantly outperformed previous SoTA NLP models based on Recurrent Neural Networks (RNNs), which were considered dead after that paper. Through this blogpost, we will introduce the integration of a new architecture, RWKV, that combines the advantages of both RNNs and transformers, and that has been recently integrated into the Hugging Face transformers library.
https://github.com/huggingface/blog/blob/main/rwkv.md
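A minimal sketch of using the integration mentioned above; the checkpoint id is one of the converted RWKV-4 models and is an illustrative choice.

```python
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-169m-pile")
model = RwkvForCausalLM.from_pretrained("RWKV/rwkv-4-169m-pile")

inputs = tokenizer("In a shocking finding, scientists discovered", return_tensors="pt")
# RWKV generates like a transformer LM but keeps an RNN-style recurrent state internally.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0]))
```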
Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face
[!NOTE] This is a guest blog post by the Ryght team.

Who is Ryght?
Ryght is building an enterprise-grade generative AI platform tailored for the healthcare and life sciences sectors. Today marks the official launch of Ryght Preview, now publicly available to all.

Life science companies are amassing a wealth of data from diverse sources (lab data, EMR, genomics, claims, pharmacy, clinical, etc.), but analysis of that data is archaic, requiring large teams for everything from simple queries to developing useful ML models. There is huge demand for actionable knowledge to drive drug development, clinical trials, and commercial activity, and the rise of precision medicine is only accelerating this demand.
SafeCoder vs. Closed-source Code Assistants
For decades, software developers have designed methodologies, processes, and tools that help them improve code quality and increase productivity. For instance, agile, test-driven development, code reviews, and CI/CD are now staples in the software industry.

In "How Google Tests Software" (Addison-Wesley, 2012), Google reports that fixing a bug during system tests - the final testing stage - is 1000x more expensive than fixing it at the unit testing stage. This puts much pressure on developers - the first link in the chain - to write quality code from the get-go.

For all the hype surrounding generative AI, code generation seems a promising way to help developers deliver better code fast. Indeed, early studies show that managed services like GitHub Copilot or Amazon CodeWhisperer help developers be more productive.

However, these services rely on closed-source models that can't be customized to your technical culture and processes. Hugging Face released SafeCoder a few weeks ago to fix this. SafeCoder is a code assistant solution built for the enterprise that gives you state-of-the-art models, transparency, customizability, IT flexibility, and privacy.

In this post, we'll compare SafeCoder to closed-source services and highlight the benefits you can expect from our solution.
https://github.com/huggingface/blog/blob/main/safecoder-vs-closed-source-code-assistants.md