HF-hub - Share and discover more about AI with social posts from the community
🚀 OV-DINO (Open-Vocabulary Detection with Instance-level Noise Optimization)

A new approach to open-vocabulary object detection: it improves the ability of vision models to detect and identify objects in images, including objects outside the training data.

🤩 SAM2 integration in Demo👇
VGGHeads

- Gradio demo: https://huggingface.co/spaces/okupyn/vgg_heads
- 5 visualization modes: 'Full', 'Head Boxes', 'Face Landmarks', 'Head Mesh', 'Head Pose'
- Model predicts multiple aspects of head detection & reconstruction remarkably well
- VR, gaming, & animation uses
- 2D image inputs for 3D head outputs
📣 LLM-DetectAIve: Fine-grained detection of machine-generated text

Classifies any text into 4 categories: Human-written, Machine-generated, Machine-written machine-humanized, and Human-written machine-polished💡😀

More nuanced than the current binary-classification SOTA
Text-to-Single-ID Generation
🚀🚀🚀Quick start:
1. Enter a text prompt (Chinese or English), upload an image with a face, and click the Run button.
2. (Optional) You can also upload an image as the style reference for the results. 🤗
💡💡💡Tips:
1. Try to avoid creating too small faces, as this may lead to some artifacts. (Currently, the short side length of the generated image is limited to 512)
2. It's a good idea to upload multiple reference photos of your face to improve the prompt and ID consistency. Additional references can be uploaded in the "ID supplements".
3. The appropriate values of "Face ID Scale" and "Face Structure Scale" are important for balancing the ID and text alignment. We recommend using "Face ID Scale" (0.5~0.7) and "Face Structure Scale" (0.0~0.4).
Try it: https://huggingface.co/spaces/Junjie96/UniPortrait
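If you would rather script the Space than click through the UI, here is a minimal sketch using gradio_client. Only the Space id comes from the link above; the endpoint name and argument order are not documented here, so the example inspects them with view_api() first and keeps the actual call hypothetical (commented out).

```python
# Sketch: driving the UniPortrait Space programmatically with gradio_client.
# The Space id is taken from the link above; endpoint names/parameters are unknown here.
from gradio_client import Client

client = Client("Junjie96/UniPortrait")
client.view_api()  # prints the Space's real endpoints and their parameters

# After checking view_api(), call the matching endpoint. Hypothetical example:
# result = client.predict(
#     "a studio portrait photo of the person",  # text prompt (Chinese or English)
#     "my_face.jpg",                            # ID reference image
#     0.6,                                      # Face ID Scale (0.5~0.7 suggested above)
#     0.2,                                      # Face Structure Scale (0.0~0.4 suggested above)
#     api_name="/generate",                     # placeholder endpoint name
# )
```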
Introducing FalconMamba 7B: An attention-free 7B model which is pretty strong!

🤯 Can process unlimited sequence lengths, outperforms traditional models, and fits on a single 24GB GPU.

Open-source and available on HF 🤗. FalconMamba-7B Gradio demo: https://huggingface.co/spaces/tiiuae/falcon-mamba-playground
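For local use, a minimal transformers sketch is below. It assumes the checkpoint id "tiiuae/falcon-mamba-7b" and a recent transformers release with Mamba support; check the model card before running.

```python
# Minimal sketch: running FalconMamba 7B locally with transformers.
# Assumes the "tiiuae/falcon-mamba-7b" checkpoint id; bf16 fits on a 24GB GPU per the post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The advantage of attention-free models is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```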
VGGHeads

🤯 Model performs simultaneous head detection and head mesh reconstruction (for multiple faces!) from a single image in a single step.

Runs on a CPU! 🚀 More + link 👇
https://pbs.twimg.com/media/GUyFx9WWgAAorFY?format=jpg&name=small
Online demos for BiRefNet on @huggingface Spaces!

Is this the best background removal model out there? 🤯
MIT licensed. 5.5 GB of GPU memory needed for inference on 1024x1024 images. 🤩
BiRefNet

🔥 Gradio Demo 1 with ImageSlider output: https://huggingface.co/spaces/not-lain/background-removal

Gradio demo 2 by the author 🙌
https://huggingface.co/spaces/ZhengPeng7/BiRefNet_demo
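If you want to run it outside the demos, here is a local-inference sketch. It assumes the "ZhengPeng7/BiRefNet" checkpoint loads via trust_remote_code as an image-segmentation model and that the 1024x1024 preprocessing below matches the model card; verify both there.

```python
# Sketch of local BiRefNet inference (assumed checkpoint id and preprocessing; see model card).
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    "ZhengPeng7/BiRefNet", trust_remote_code=True
).eval()

preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

image = Image.open("input.jpg").convert("RGB")
with torch.no_grad():
    preds = model(preprocess(image).unsqueeze(0))[-1].sigmoid()  # foreground probability mask

mask = transforms.ToPILImage()(preds[0].squeeze()).resize(image.size)
image.putalpha(mask)  # use the predicted mask as an alpha channel
image.save("output.png")
```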
NEW and Hot: AuraSR Upscaler

- 600M parameters
- Based on the GigaGAN paper from Adobe
- GAN upscalers are much faster than diffusion upscalers
- Upscaling to 1024px in 1/4th of a second

Model and demo are up on the Hugging Face Hub. Great work by Fal AI (model) and Gokay Aydogan (demo).
🔥 AuraSR is a GAN-based Super-Res upscaler for generated images, a variation of the GigaGAN paper for image-conditioned upscaling.

Demo by @NONDA30: https://huggingface.co/spaces/gokaygokay/AuraSR

🔥 Torch implementation is based on the unofficial lucidrains/gigagan-pytorch repository: https://github.com/lucidrains/gigagan-pytorch?ref=blog.fal.ai
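For local upscaling, a minimal sketch with the aura-sr package (pip install aura-sr) is below. The checkpoint id "fal/AuraSR-v2" and the upscale_4x helper are as recalled from the release, not confirmed here; check the model card for the exact names.

```python
# Sketch: 4x GAN super-resolution with the aura-sr package (assumed checkpoint id and API).
from PIL import Image
from aura_sr import AuraSR

aura_sr = AuraSR.from_pretrained("fal/AuraSR-v2")  # assumed repo id; see the model card

image = Image.open("generated_256px.png")
upscaled = aura_sr.upscale_4x(image)  # GAN upscaling is fast -- no diffusion sampling loop
upscaled.save("upscaled_1024px.png")
```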
How to run Yi chat models with an API
Posted November 23, 2023 by @nateraw

The Yi series models are large language models trained from scratch by developers at 01.AI. Today, they’ve released two new models: Yi-6B-Chat and Yi-34B-Chat. These models extend the base models, Yi-6B and Yi-34B, and are fine-tuned for chat completion.

Yi-34B currently holds state-of-the-art results on most benchmarks, beating larger models like Llama-70B.
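A minimal sketch of calling the chat model through the Replicate Python client follows. It assumes REPLICATE_API_TOKEN is set and that the model is published under the "01-ai/yi-34b-chat" slug; confirm the slug and input fields on the Replicate model page.

```python
# Sketch: running Yi-34B-Chat via the Replicate API (assumed model slug).
import replicate

output = replicate.run(
    "01-ai/yi-34b-chat",
    input={"prompt": "Explain the difference between a list and a tuple in Python."},
)
print("".join(output))  # language models stream output as an iterator of strings
```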
Run Code Llama 70B with an API
Posted January 30, 2024 by @cbh123

Code Llama is a code generation model built on top of Llama 2. It can generate code and natural language about code in many programming languages, including Python, JavaScript, TypeScript, C++, Java, PHP, C#, Bash and more.

Today, Meta announced a more powerful new version of Code Llama with 70 billion parameters. It’s one of the highest performing open models. Meta reports a 67.8 on HumanEval, which beats zero-shot GPT-4.

With Replicate, you can run Code Llama 70B in the cloud with one line of code.

Contents
Code Llama 70B variants
Run Code Llama 70B with JavaScript
Run Code Llama 70B with Python
Run Code Llama 70B with cURL
Keep up to speed
Code Llama 70B variants
There are three variants of Code Llama 70B. The code snippets in this guide use codellama-70b-instruct, but all three variants are available on Replicate:

- Code Llama 70B Base is the foundation model.
- Code Llama 70B Python is trained on Python code.
- Code Llama 70B Instruct is fine-tuned for understanding natural language instructions.
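As a taste of the "Run Code Llama 70B with Python" step, the one-line call looks roughly like the sketch below. It assumes REPLICATE_API_TOKEN is set and the "meta/codellama-70b-instruct" slug; confirm the slug and input fields on the Replicate model page.

```python
# Sketch: one replicate.run call against Code Llama 70B Instruct (assumed slug).
import replicate

output = replicate.run(
    "meta/codellama-70b-instruct",
    input={"prompt": "Write a Python function that checks whether a string is a palindrome."},
)
print("".join(output))
```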
Run Snowflake Arctic with an API
Posted April 23, 2024 by @cbh123

Snowflake Arctic is a new open-source language model from Snowflake. Arctic is on par with or better than both Llama 3 8B and Llama 2 70B on all metrics while using less than half of the training compute budget.

It's massive. At 480B, Arctic is the biggest open-source model to date. As expected from a model from Snowflake, it's good at SQL and other coding tasks, and it has a liberal Apache 2.0 license.

With Replicate, you can run Arctic in the cloud with one line of code.
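The call follows the same pattern as the other Replicate examples; the "snowflake/snowflake-arctic-instruct" slug below is an assumption to confirm on replicate.com.

```python
# Sketch: running Snowflake Arctic via the Replicate API (assumed model slug).
import replicate

output = replicate.run(
    "snowflake/snowflake-arctic-instruct",
    input={"prompt": "Write a SQL query that returns the top 5 customers by total order value."},
)
print("".join(output))
```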
Picking an SD3 version
Stability AI have packaged up SD3 Medium in different ways to make sure it can run on as many devices as possible.

SD3 uses three different text encoders. (The text encoder is the part that takes your prompt and puts it into a format the model can understand). One of these new text encoders is really big – meaning it uses a lot of memory. If you’re looking at the SD3 Hugging Face weights, you’ll see four options with different text encoder configurations. You should choose which one to use based on your available VRAM.

sd3_medium_incl_clips_t5xxlfp8.safetensors
This version contains the model weights, the two CLIP text encoders, and the large T5-XXL model in a compressed fp8 format. We recommend these weights for simplicity and best results.

sd3_medium_incl_clips_t5xxlfp16.safetensors
The same as sd3_medium_incl_clips_t5xxlfp8.safetensors, except the T5 part isn’t compressed as much. By using fp16 instead of fp8, you’ll get a slight improvement in your image quality. This improvement comes at the cost of higher memory usage.

sd3_medium_incl_clips.safetensors
This version does away with the T5 element altogether. It includes the weights with just the two CLIP text encoders. This is a good option if you do not have much VRAM, but your results might be very different from the full version. You might notice that this version doesn’t follow your prompts as closely, and it may also reduce the quality of text in images.

sd3_medium.safetensors
This model is just the base weights without any text encoders. If you use these weights, make sure you’re loading the text encoders separately. Stability AI have provided an example ComfyUI workflow for this.
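The same trade-off shows up if you load SD3 Medium with diffusers: you can drop the T5-XXL encoder to save VRAM, mirroring sd3_medium_incl_clips.safetensors. A sketch is below; it assumes you have accepted the license for the gated "stabilityai/stable-diffusion-3-medium-diffusers" repo.

```python
# Sketch: loading SD3 Medium in diffusers without the T5-XXL text encoder to reduce VRAM.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,   # drop T5-XXL: lower memory, weaker prompt following and in-image text
    tokenizer_3=None,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a red fox in a snowy forest", num_inference_steps=28).images[0]
image.save("fox.png")
```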
How to get the best results from Stable Diffusion 3?
Stability AI recently released the weights for Stable Diffusion 3 Medium, a 2 billion parameter text-to-image model that excels at photorealism, typography, and prompt following.

You can run the official Stable Diffusion 3 model on Replicate, and it is available for commercial use. We have also open-sourced our Diffusers and ComfyUI implementations (read our guide to ComfyUI).

In this blog post we’ll show you how to use Stable Diffusion 3 (SD3) to get the best images, including how to prompt SD3, which is a bit different from previous Stable Diffusion models.

To help you experiment, we’ve created an SD3 explorer model that exposes all of the settings we discuss here.
(Screenshot: https://d31rfu1d3w8e4q.cloudfront.net/static/blog/get-the-best-from-stable-diffusion-3/explorer-screenshot.png)
What makes FLUX.1 special?
FLUX.1 models have state-of-the-art performance in prompt following, visual quality, image detail, and output diversity. Here are some particular areas where we’ve been impressed:

Text! Unlike older models that often messed up similar-looking letters, Flux can handle tricky words with repeated letters. This makes it great for designs where text needs to be accurate. Check out this Black Forest Flux Schnell gateau:
https://d31rfu1d3w8e4q.cloudfront.net/static/blog/flux/cake-text.png
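You can try the same kind of text-rendering prompt yourself with diffusers' FluxPipeline; the sketch below assumes access to the "black-forest-labs/FLUX.1-schnell" weights and a recent diffusers release.

```python
# Sketch: prompting FLUX.1 [schnell] for in-image text with diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    'a chocolate gateau with "Black Forest" piped in white icing on top',
    num_inference_steps=4,   # schnell is distilled for very few steps
    guidance_scale=0.0,      # schnell is run without classifier-free guidance
).images[0]
image.save("gateau.png")
```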
How to fine-tune: Focus on effective datasets?
This is the third blog post in a series about adapting open source large language models (LLMs). In this post, we explore some rules of thumb for curating a good training dataset.

In Part 1, we took a look at prevalent approaches for adapting language models to domain data.
In Part 2, we discussed how to determine if fine-tuning is the right approach for your use case.
Introduction

Fine-tuning LLMs is a mix of art and science, with best practices in the field still emerging. In this blog post, we’ll highlight design variables for fine-tuning and give directional guidance on best practices we’ve seen so far to fine-tune models with resource constraints. We recommend using the information below as a starting point to strategize your fine-tuning experiments.

Full fine-tuning vs. parameter-efficient fine-tuning (PEFT)

Both full fine-tuning and PEFT have shown improvements in downstream performance when applied to new domains in both academic and practical settings. Choosing one boils down to compute available (in GPU hours and GPU memory), performance on tasks other than the target downstream task (the learning-forgetting tradeoff) and human annotation costs.

Full fine-tuning is more prone to two problems: model collapse and catastrophic forgetting. Model collapse is where the model output converges to a limited set of outputs and the tail of the original content distribution disappears. Catastrophic forgetting, as discussed in Part 1 of this series, leads to the model losing its abilities. Some early empirical studies show that full fine-tuning techniques are more prone to the above-mentioned issues than PEFT techniques, though more research needs to be done.

PEFT techniques serve as natural regularizers for fine-tuning by design. PEFT often costs relatively less compute to train a downstream model and is much more accessible for a resource-constrained scenario with limited dataset sizes. In some cases, full fine-tuning has shown better performance at the specific task of interest, often at the cost of forgetting some of the capabilities of the original model. This “learning-forgetting” tradeoff between the specific downstream task performance and performance on other tasks is explored deeply in the comparison of LoRA and full fine-tuning in this paper.

Given resource constraints, PEFT techniques will likely give a better performance boost/cost ratio than full fine-tuning. If downstream performance is of paramount importance despite resource constraints, full fine-tuning will be the most effective. In either scenario, the key is to create a high-quality dataset, keeping the following principles in mind.
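To make the PEFT option concrete, here is an illustrative LoRA setup with the peft library: adapters are trained on top of a frozen base model, which is what keeps compute and memory low and acts as a built-in regularizer. The base model id and target module names are examples, not recommendations.

```python
# Illustrative LoRA (PEFT) sketch: only small adapter matrices are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")  # example base model
lora_config = LoraConfig(
    r=16,                                  # adapter rank: the main capacity-vs-cost knob
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections are a common choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()          # typically well under 1% of the full model
```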
How NVIDIA is using structured weight pruning and knowledge distillation to build new Llama models
Large language models like Llama can move with impressive speed and precision to handle a variety of challenging tasks, such as generating code, solving math problems, and helping doctors make life-saving medical decisions. Open source models are already leading to incredible breakthroughs across disciplines—however, they’re resource-intensive to deploy. It’s important that we work collaboratively across the industry to make it even easier for people to tap into the game-changing potential of LLMs.

Last month, we announced Llama 3.1, which includes our largest model yet, the 405B, as well as two smaller models with 70 billion and 8 billion parameters, respectively. Smaller models from a larger relative are typically cheaper to deploy to the masses and perform well across many language tasks. In a new research paper, our partners at NVIDIA explore how various large models can be made smaller using structured weight pruning and knowledge distillation—without having to train a new model from scratch. Working with Llama 3.1 8B, the team shares how it created Llama-Minitron 3.1 4B, its first work within the Llama 3.1 open source family.

Learn more about this work, and get the pruning and distillation strategy and additional resources, by reading NVIDIA’s blog post: https://ai.meta.com/blog/nvidia-llama/
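For intuition on the distillation half of this recipe, here is a generic sketch of the objective: the pruned student is trained to match the teacher's softened output distribution alongside the usual cross-entropy on ground-truth tokens. This is an illustration of standard knowledge distillation, not NVIDIA's exact training recipe.

```python
# Toy sketch of a knowledge-distillation loss (generic, not the paper's exact recipe).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled teacher and student distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard next-token cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * kd + (1 - alpha) * ce
```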