HF-hub - Share and discover more about AI with social posts from the community.
The N Implementation Details of RLHF with PPO
RLHF / ChatGPT has been a popular research topic these days. In our quest to research more on RLHF, this blog post attempts to reproduce OpenAI’s 2019 original RLHF codebase at openai/lm-human-preferences. Despite its “tensorflow-1.x-ness,” OpenAI’s original codebase is very well evaluated and benchmarked, making it a good place to study the engineering details of RLHF implementation.

We aim to:

reproduce OAI’s results in stylistic tasks and match the learning curves of openai/lm-human-preferences;
present a checklist of implementation details, similar in spirit to The 37 Implementation Details of Proximal Policy Optimization and Debugging RL, Without the Agonizing Pain;
provide a simple-to-read and minimal reference implementation of RLHF.
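
To make the training loop concrete, here is a minimal sketch of a single PPO step using the Hugging Face trl library. This is an illustration of the general RLHF-with-PPO recipe, not a reproduction of OpenAI's TensorFlow 1.x code; the PPOTrainer API shown matches older trl releases, and the scalar reward here is a placeholder for a real reward model.

```python
# Illustrative sketch of one PPO step for RLHF with trl (older PPOTrainer API).
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5, batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)      # policy + value head
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)  # frozen reference for the KL penalty
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# In a real run, queries come from a prompt dataset and rewards from a trained reward model.
query = tokenizer.encode("The movie was", return_tensors="pt").squeeze(0)
response = ppo_trainer.generate(query, max_new_tokens=16, pad_token_id=tokenizer.eos_token_id)
response_only = response.squeeze(0)[query.shape[0]:]  # strip the prompt tokens
reward = torch.tensor(1.0)                            # placeholder scalar reward

stats = ppo_trainer.step([query], [response_only], [reward])
```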
Probabilistic Time Series Forecasting with 🤗 Transformers
Introduction
Time series forecasting is an essential scientific and business problem and as such has also seen a lot of innovation recently with the use of deep learning based models in addition to the classical methods. An important difference between classical methods like ARIMA and novel deep learning methods is that the latter can learn a single global model from a large collection of related time series and output full probabilistic forecasts rather than point estimates.
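
As a small illustration of what such a deep learning approach looks like in 🤗 Transformers, the snippet below instantiates the probabilistic Time Series Transformer; the configuration values are illustrative placeholders rather than recommended settings.

```python
# Minimal sketch: the probabilistic Time Series Transformer from 🤗 Transformers.
# Feature settings (time features, lags) depend on your dataset; values here are examples.
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction

config = TimeSeriesTransformerConfig(
    prediction_length=24,             # forecast horizon
    context_length=48,                # how much history the encoder sees
    lags_sequence=[1, 2, 3, 7],       # lagged values added as extra features
    num_time_features=2,              # e.g. day-of-week and hour-of-day encodings
    distribution_output="student_t",  # emit a distribution, not a point forecast
)
model = TimeSeriesTransformerForPrediction(config)

# During training the model receives past_values / past_time_features / past_observed_mask
# plus future_values, and is optimized with the negative log-likelihood of the output distribution.
# At inference time, model.generate(...) draws sample trajectories for probabilistic forecasts.
```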
Google Cloud TPUs made available to Hugging Face users

We're excited to share some great news! AI builders are now able to accelerate their applications with Google Cloud TPUs on Hugging Face Inference Endpoints and Spaces!

For those who might not be familiar, TPUs are custom-made AI hardware designed by Google. They are known for their ability to scale cost-effectively and deliver impressive performance across various AI workloads. This hardware has played a crucial role in some of Google's latest innovations, including the development of the Gemma 2 open models.

This is a big step in our ongoing collaboration to provide you with the best tools and resources for your AI projects. We're really looking forward to seeing what amazing things you'll create with this new capability!
Train your first Decision Transformer
In a previous post, we announced the launch of Decision Transformers in the transformers library. This new technique of using a Transformer as a Decision-making model is getting increasingly popular.

So today, you’ll learn to train your first Offline Decision Transformer model from scratch to make a half-cheetah run. We'll train it directly on a Google Colab that you can find here 👉 https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb

*An "expert" Decision Transformers model, learned using offline RL in the Gym HalfCheetah environment.*
Sounds exciting? Let's get started!

What are Decision Transformers?
Training Decision Transformers
Loading the dataset and building the Custom Data Collator
Training the Decision Transformer model with a 🤗 transformers Trainer
Conclusion
What’s next?
References
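
As a quick taste of the setup covered in the notebook, here is a minimal sketch of loading the offline HalfCheetah dataset and instantiating a Decision Transformer with 🤗 transformers; the dataset id follows the accompanying Colab, and the hyperparameters are illustrative.

```python
# Sketch: offline RL dataset + Decision Transformer model.
from datasets import load_dataset
from transformers import DecisionTransformerConfig, DecisionTransformerModel

# Replay dataset used in the accompanying notebook (expert HalfCheetah trajectories).
dataset = load_dataset("edbeeching/decision_transformer_gym_replay", "halfcheetah-expert-v2")

config = DecisionTransformerConfig(
    state_dim=17,  # HalfCheetah observation size
    act_dim=6,     # HalfCheetah action size
)
model = DecisionTransformerModel(config)

# Training then proceeds with a custom data collator that samples sub-trajectories
# (states, actions, returns-to-go, timesteps) and a standard 🤗 transformers Trainer.
```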
Easily Train Models with H100 GPUs on NVIDIA DGX Cloud
Today, we are thrilled to announce the launch of Train on DGX Cloud, a new service on the Hugging Face Hub, available to Enterprise Hub organizations. Train on DGX Cloud makes it easy to use open models with the accelerated compute infrastructure of NVIDIA DGX Cloud. Together, we built Train on DGX Cloud so that Enterprise Hub users can easily access the latest NVIDIA H100 Tensor Core GPUs, to fine-tune popular Generative AI models like Llama, Mistral, and Stable Diffusion, in just a few clicks within the Hugging Face Hub.

Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum
Latent Diffusion models are game changers when it comes to solving text-to-image generation problems. Stable Diffusion is one of the most famous examples that got wide adoption in the community and industry. The idea behind the Stable Diffusion model is simple and compelling: you generate an image from a noise vector in multiple small steps refining the noise to a latent image representation. This approach works very well, but it can take a long time to generate an image if you do not have access to powerful GPUs.

Over the past five years, the OpenVINO Toolkit has accumulated many features for high-performance inference. Initially designed for Computer Vision models, it still dominates in this domain, showing best-in-class inference performance for many contemporary models, including Stable Diffusion. However, optimizing Stable Diffusion models for resource-constrained applications requires going far beyond just runtime optimizations. And this is where model optimization capabilities from the OpenVINO Neural Network Compression Framework (NNCF) come into play.

In this blog post, we will outline the problems of optimizing Stable Diffusion models and propose a workflow that substantially reduces the latency of such models when running on resource-constrained hardware such as CPUs. In particular, we achieved a 5.1x inference acceleration and a 4x model footprint reduction compared to PyTorch.
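
For orientation, here is a minimal sketch of running Stable Diffusion on CPU through OpenVINO via 🤗 Optimum Intel; the checkpoint id is an example, and NNCF-based quantization (the focus of this post) is an additional optimization step on top of this baseline.

```python
# Sketch: Stable Diffusion on CPU through OpenVINO via 🤗 Optimum Intel.
# export=True converts the PyTorch pipeline to OpenVINO IR on the fly.
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
pipeline.compile()

image = pipeline("a cozy cabin in a snowy forest, watercolor").images[0]
image.save("cabin.png")
```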
Training and Finetuning Embedding Models with Sentence Transformers v3
Sentence Transformers is a Python library for using and training embedding models for a wide range of applications, such as retrieval augmented generation, semantic search, semantic textual similarity, paraphrase mining, and more. Its v3.0 update is the largest since the project's inception, introducing a new training approach. In this blogpost, I'll show you how to use it to finetune Sentence Transformer models to improve their performance on specific tasks. You can also use this method to train new Sentence Transformer models from scratch.

Finetuning Sentence Transformers now involves several components, including datasets, loss functions, training arguments, evaluators, and the new trainer itself. I'll go through each of these components in detail and provide examples of how to use them to train effective models.
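
As a preview, here is a minimal sketch of the new v3 training flow; the base model, dataset id, and split size are illustrative choices, and training arguments and evaluators are omitted for brevity.

```python
# Sketch of the v3 training loop: model + dataset + loss + trainer.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("microsoft/mpnet-base")
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train[:10000]")
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,  # columns: anchor, positive, negative
    loss=loss,
)
trainer.train()
model.save_pretrained("models/mpnet-base-all-nli")
```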
Train your ControlNet with diffusers 🧨
Introduction
ControlNet is a neural network structure that allows fine-grained control of diffusion models by adding extra conditions. The technique debuted with the paper Adding Conditional Control to Text-to-Image Diffusion Models, and quickly took over the open-source diffusion community with the author's release of 8 different conditions to control Stable Diffusion v1-5, including pose estimations, depth maps, canny edges, sketches, and more.
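
The post focuses on training your own ControlNet; for context, here is a minimal sketch of how a trained ControlNet condition (canny edges in this example) plugs into a Stable Diffusion pipeline with diffusers. The checkpoint ids are examples; a ControlNet you train yourself loads the same way.

```python
# Sketch: plugging a trained ControlNet condition into Stable Diffusion with 🧨 diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# canny_image would be a PIL edge map extracted from a reference picture:
# image = pipe("a futuristic city at night", image=canny_image).images[0]
```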
Finetune Stable Diffusion Models with DDPO via TRL
Introduction
Diffusion models (e.g., DALL-E 2, Stable Diffusion) are a class of generative models that are widely successful at generating images, most notably of the photorealistic kind. However, the images generated by these models may not always be on par with human preference or human intention. Thus arises the alignment problem: how does one make sure that a model's outputs are aligned with human preferences like “quality”, or with intent that is hard to express via prompts? This is where Reinforcement Learning comes into the picture.

In the world of Large Language Models (LLMs), Reinforcement Learning (RL) has proven to be a very effective tool for aligning said models to human preferences. It’s one of the main recipes behind the superior performance of systems like ChatGPT. More precisely, RL is the critical ingredient of Reinforcement Learning from Human Feedback (RLHF), which makes ChatGPT chat like human beings.
Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU
We are excited to officially release the integration of trl with peft to make Large Language Model (LLM) fine-tuning with Reinforcement Learning more accessible to anyone! In this post, we explain why this is a competitive alternative to existing fine-tuning approaches.

Note that peft is a general tool that can be applied to many ML use cases, but it’s particularly interesting for RLHF, as this method is especially memory-hungry!

If you want to dive directly into the code, check out the example scripts on the documentation page of TRL.
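
For orientation, here is a minimal sketch of the core idea: load a large causal LM in 8-bit and attach LoRA adapters via peft so that only a small fraction of parameters is trained during PPO. The model id and LoRA hyperparameters are illustrative; see the TRL docs for the full scripts.

```python
# Sketch: 8-bit base model + LoRA adapters, ready for trl's PPOTrainer.
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLMWithValueHead.from_pretrained(
    "EleutherAI/gpt-neox-20b",  # the 20B-parameter model discussed in the post
    load_in_8bit=True,          # 8-bit weights via bitsandbytes
    peft_config=lora_config,    # train only the LoRA adapters (plus the value head)
)
# The resulting model can then be passed to trl's PPOTrainer as usual.
```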
How to Install and Use the Hugging Face Unity API
The Hugging Face Unity API is an easy-to-use integration of the Hugging Face Inference API, allowing developers to access and use Hugging Face AI models in their Unity projects. In this blog post, we'll walk through the steps to install and use the Hugging Face Unity API.

Installation
Open your Unity project
Go to Window -> Package Manager
Click + and select Add Package from git URL
Enter https://github.com/huggingface/unity-api.git
Once installed, the Unity API wizard should pop up. If not, go to Window -> Hugging Face API Wizard


Enter your API key. Your API key can be created in your Hugging Face account settings.
Test the API key by clicking Test API key in the API Wizard.
Optionally, change the model endpoints to change which model to use. The model endpoint for any model that supports the inference API can be found by going to the model on the Hugging Face website, clicking Deploy -> Inference API, and copying the url from the API_URL field.
Configure advanced settings if desired. For up-to-date information, visit the project repository at https://github.com/huggingface/unity-api
To see examples of how to use the API, click Install Examples. You can now close the API Wizard.
AI Speech Recognition in Unity
Open Source AI Game Jam

Introduction
This tutorial guides you through the process of implementing state-of-the-art Speech Recognition in your Unity game using the Hugging Face Unity API. This feature can be used for giving commands, speaking to an NPC, improving accessibility, or any other functionality where converting spoken words to text may be useful.

To try Speech Recognition in Unity for yourself, check out the live demo on itch.io.

Prerequisites
This tutorial assumes basic knowledge of Unity. It also requires you to have installed the Hugging Face Unity API. For instructions on setting up the API, check out our earlier blog post.
How to host a Unity game in a Space
Did you know you can host a Unity game in a Hugging Face Space? No? Well, you can!

Hugging Face Spaces are an easy way to build, host, and share demos. While they are typically used for Machine Learning demos, they can also host playable Unity games. Here are some examples:

Huggy
Farming Game
Unity API Demo
Here's how you can host your own Unity game in a Space.

Step 1: Create a Space using the Static HTML template
First, navigate to Hugging Face Spaces to create a space.



Select the "Static HTML" template, give your Space a name, and create it.



Step 2: Use Git to Clone the Space
Clone your newly created Space to your local machine using Git. You can do this by running the following command in your terminal or command prompt:

git clone https://huggingface.co/spaces/{your-username}/{your-space-name}
Step 3: Open your Unity Project
Open the Unity project you want to host in your Space.



Step 4: Switch the Build Target to WebGL
Navigate to File > Build Settings and switch the Build Target to WebGL.



Step 5: Open Player Settings
In the Build Settings window, click the "Player Settings" button to open the Player Settings panel.



Step 6: Optionally, Download the Hugging Face Unity WebGL Template
You can enhance your game's appearance in a Space by downloading the Hugging Face Unity WebGL template, available here. Just download the repository and drop it in your project files.

Then, in the Player Settings panel, switch the WebGL template to Hugging Face. To do so, in Player Settings, click "Resolution and Presentation", then select the Hugging Face WebGL template.



Step 7: Change the Compression Format to Disabled
In the Player Settings panel, navigate to the "Publishing Settings" section and change the Compression Format to "Disabled".



Step 8: Build your Project
Return to the Build Settings window and click the "Build" button. Choose a location to save your build files, and Unity will build the project for WebGL.



Step 9: Copy the Contents of the Build Folder
After the build process is finished, navigate to the folder containing your build files. Copy the files in the build folder to the repository you cloned in Step 2.



Step 10: Enable Git-LFS for Large File Storage
Navigate to your repository. Use the following commands to track large build files.

git lfs install
git lfs track "Build/*"
Step 11: Push your Changes
Finally, use the following Git commands to push your changes:

git add .
git commit -m "Add Unity WebGL build files"
git push
Done!
Make LLM Fine-tuning 2x faster with Unsloth and 🤗 TRL
Pulling your hair out because LLM fine-tuning is taking forever? In this post, we introduce a lightweight tool developed by the community to make LLM fine-tuning go super fast!

Before diving into Unsloth, it may be helpful to read our QLoRA blog post, or be familiar with LLM fine-tuning using the 🤗 PEFT library.

Unsloth - 2x faster, -40% memory usage, 0% accuracy degradation
Unsloth is a lightweight library for faster LLM fine-tuning which is fully compatible with the Hugging Face ecosystem (Hub, transformers, PEFT, TRL). The library is actively developed by the Unsloth team (Daniel and Michael) and the open source community. The library supports most NVIDIA GPUs, from the GTX 1070 all the way up to H100s, and can be used with the entire trainer suite from the TRL library (SFTTrainer, DPOTrainer, PPOTrainer). At the time of writing, Unsloth supports the Llama (CodeLlama, Yi, etc.) and Mistral architectures.

Unsloth works by overwriting some parts of the modeling code with optimized operations. By manually deriving backpropagation steps and rewriting all PyTorch modules into Triton kernels, Unsloth can both reduce memory usage and make fine-tuning faster. Crucially, accuracy degradation is 0% with respect to normal QLoRA, because no approximations are made in the optimized code.
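
Here is a minimal sketch of what an Unsloth-accelerated fine-tuning setup can look like with TRL's SFTTrainer; the model id, dataset, and hyperparameters are placeholders, and the FastLanguageModel calls follow Unsloth's documented interface at the time of writing.

```python
# Sketch: load a model through Unsloth, attach LoRA adapters, and fine-tune with TRL's SFTTrainer.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # 4-bit base checkpoint (example)
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("imdb", split="train[:1%]")  # toy text dataset for illustration
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", per_device_train_batch_size=2, max_steps=10),
)
trainer.train()
```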
AI Policy @🤗: Comments on U.S. National AI Research Resource Interim Report
In late June 2022, Hugging Face submitted a response to the White House Office of Science and Technology Policy and National Science Foundation’s Request for Information on a roadmap for implementing the National Artificial Intelligence Research Resource (NAIRR) Task Force’s interim report findings. As a platform working to democratize machine learning by empowering all backgrounds to contribute to AI, we strongly support NAIRR’s efforts.
Using Machine Learning to Aid Survivors and Race through Time
On February 6, 2023, earthquakes measuring 7.7 and 7.6 hit South Eastern Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injured as of February 21.

A few hours after the earthquake, a group of programmers started a Discord server to roll out an application called afetharita, literally meaning "disaster map". This application would help search & rescue teams and volunteers find survivors and bring them help. The need for such an app arose when survivors posted screenshots of texts with their addresses and what they needed (including rescue) on social media. Some survivors also tweeted what they needed so their relatives knew they were alive and that they needed rescue. Needing to extract information from these tweets, we developed various applications to turn them into structured data and raced against time in developing and deploying these apps.
Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore
This blog post will show how easy it is to fine-tune pre-trained Transformer models for your dataset using the Hugging Face Optimum library on Graphcore Intelligence Processing Units (IPUs). As an example, we will show a step-by-step guide and provide a notebook that takes a large, widely-used chest X-ray dataset and trains a vision transformer (ViT) model.

Introducing vision transformer (ViT) models
In 2017 a group of Google AI researchers published a paper introducing the transformer model architecture. Characterised by a novel self-attention mechanism, transformers were proposed as a new and efficient group of models for language applications. Indeed, in the last five years, transformers have seen explosive popularity and are now accepted as the de facto standard for natural language processing (NLP).
A Dive into Vision-Language Models
Human learning is inherently multi-modal as jointly leveraging multiple senses helps us understand and analyze new information better. Unsurprisingly, recent advances in multi-modal learning take inspiration from the effectiveness of this process to create models that can process and link information using various modalities such as image, video, text, audio, body gestures, facial expressions, and physiological signals.

Since 2021, we’ve seen an increased interest in models that combine vision and language modalities (also called joint vision-language models), such as OpenAI’s CLIP. Joint vision-language models have shown particularly impressive capabilities in very challenging tasks such as image captioning, text-guided image generation and manipulation, and visual question-answering. This field continues to evolve, and so does its effectiveness in improving zero-shot generalization leading to various practical use cases.
Kakao Brain’s Open Source ViT, ALIGN, and the New COYO Text-Image Dataset
Kakao Brain and Hugging Face are excited to release COYO, a new open-source image-text dataset of 700 million pairs, and two new visual language models trained on it, ViT and ALIGN. This is the first time the ALIGN model has ever been made public for free and open-source use, and the first release of ViT and ALIGN models that come with their training dataset.

Kakao Brain’s ViT and ALIGN models follow the same architecture and hyperparameters as provided in the original respective Google models but are trained on the open source COYO dataset. Google’s ViT and ALIGN models, while trained on huge datasets (ViT trained on 300 million images and ALIGN trained on 1.8 billion image-text pairs respectively), cannot be replicated because the datasets are not public. This contribution is particularly valuable to researchers who want to reproduce visual language modeling with access to the data as well.
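
As a quick illustration, the snippet below runs zero-shot image-text matching with the released ALIGN checkpoint through 🤗 Transformers; the image URL and candidate captions are arbitrary examples.

```python
# Sketch: zero-shot image-text matching with Kakao Brain's open-source ALIGN checkpoint.
import requests
import torch
from PIL import Image
from transformers import AlignModel, AlignProcessor

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # two cats on a couch
image = Image.open(requests.get(url, stream=True).raw)
candidates = ["a photo of two cats", "a photo of a dog", "an aerial view of a city"]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
print(dict(zip(candidates, probs[0].tolist())))
```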
What is a Vision Language Model?
Vision language models are broadly defined as multimodal models that can learn from images and text. They are a type of generative models that take image and text inputs, and generate text outputs. Large vision language models have good zero-shot capabilities, generalize well, and can work with many types of images, including documents, web pages, and more. The use cases include chatting about images, image recognition via instructions, visual question answering, document understanding, image captioning, and others. Some vision language models can also capture spatial properties in an image. These models can output bounding boxes or segmentation masks when prompted to detect or segment a particular subject, or they can localize different entities or answer questions about their relative or absolute positions. There’s a lot of diversity within the existing set of large vision language models, the data they were trained on, how they encode images, and, thus, their capabilities.