SmolLM Instruct v0.2 - 135M, 360M and 1.7B parameter instruction tuned Small LMs, Apache 2.0 licensed. Closing the gap to bring intelligence closer to thought (<500 ms per generation)! 🔥

The models are optimised to run on-device with WebGPU support (from MLC and ONNX Runtime) and llama.cpp.

Run them on your Mac, browser, GPU, CPU - it works blazingly fast.

We provide already-converted and quantised checkpoints: GGUF, MLC, and ONNX 🐐
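The instruct checkpoints expect their chat template to be applied before generation (normally done for you by the tokenizer's `apply_chat_template` or by llama.cpp's chat handling). As an illustration only, here is a minimal sketch of ChatML-style formatting; the exact markers are an assumption, so check the tokenizer's `chat_template` for the authoritative format:

```python
def chatml_prompt(messages):
    """Format chat messages in a ChatML-style layout. In practice you would
    call tokenizer.apply_chat_template(messages, add_generation_prompt=True)."""
    out = []
    for msg in messages:
        out.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    out.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(out)

prompt = chatml_prompt([{"role": "user", "content": "Name the largest planet."}])
print(prompt)
```

The trailing `<|im_start|>assistant` line is what tells the model it is its turn to speak; without it, small instruct models often continue the user's text instead of answering.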

What's new?

We train the SmolLM base models on a new synthetic dataset of 2,000 simple everyday conversations generated with llama3.1-70B -> everyday-conversations-llama3.1-2k

and existing datasets like Magpie-Pro-300K-Filtered by @argilla_io, self-oss-instruct-sc2-exec-filter-50k, and a small subset of OpenHermes-2.5 from @NousResearch.

Bonus: We release the fine-tuning scripts we used to train these checkpoints, so that you can fine-tune them for your own use-cases too. ⚡️

Enjoy! Looking forward to what you build with these 💻
Local SmolLMs - a HuggingFaceTB Collection: https://huggingface.co/collections/HuggingFaceTB/local-smollms-66c0f3b2a15b4eed7fb198d0
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck, and share your best score in the comments!

victor/fake-insects
Brief backstory: Before diving into AI, I spent over a decade working in ecological fields such as the conservation corps, biodynamic farming, and natural habitat restoration. This background instilled in me a deep concern about the environmental impact of scaling AI without sustainable practices.

Driven by this concern, I've spent months planning and experimenting to make my AI work more eco-friendly. I'm thrilled to announce that I've successfully transitioned my entire operation to run on 100% sustainable solar power!

My current setup includes multiple linked Mac Pro tower desktops and custom code built from open-source libraries. While it's a bit experimental, this configuration is working great for my needs. All my LLM research, development, and client services now run exclusively on solar energy.

I'm curious: has anyone else here experimented with renewable energy for their LLM work?

For those interested in more details, I've written a brief blog post about this journey: "Powering the Future: Be.Ta Labs' Revolutionary 100% Solar-Powered AI Operation"
https://medium.com/@betalabsllm/powering-the-future-be-ta-labs-revolutionary-100-solar-powered-ai-operation-444433e61d43
Ghost 8B Beta (1608) is a top-performing language model with unmatched multilingual support and cost efficiency.

Key Highlights:
- Superior Performance: Outperforms Llama 3.1 8B Instruct, GPT-3.5 Turbo, Claude 3 Opus, GPT-4, and more in winrate scores.
- Expanded Language Support: Now supports 16 languages, including English, Vietnamese, Spanish, Chinese, and more.
- Enhanced Capabilities: Improved math, reasoning, and instruction-following for better task handling.

With two context options (8k and 128k), Ghost 8B Beta is perfect for complex, multilingual applications, balancing power and cost-effectiveness.
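To see why the choice between the two context options matters in practice, here is a rough KV-cache memory estimate. The architectural numbers (32 layers, 8 KV heads, head dim 128, fp16) are my assumption of a typical Llama-3-8B-like 8B configuration, not figures from the Ghost 8B Beta card:

```python
def kv_cache_gib(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Approximate KV-cache size in GiB: two tensors (K and V) per layer,
    each of shape [n_kv_heads, seq_len, head_dim], stored in fp16."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len
    return total_bytes / 2**30

print(f"8k context:   {kv_cache_gib(8 * 1024):.1f} GiB")    # ~1 GiB
print(f"128k context: {kv_cache_gib(128 * 1024):.1f} GiB")  # ~16 GiB
```

Under these assumptions the cache grows linearly with context, so the 128k option needs roughly 16x the KV memory of the 8k one, on top of the model weights.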

🔗 Learn More: https://ghost-x.org/docs/models/ghost-8b-beta
ghost-x/ghost-8b-beta-668ead6179f93be717db4542 Ghost 8B Beta
Flux.1 + LoRA Tutorial: The duo that could replace Midjourney (prompting guide included)
I made my first Flux Lora style on Civitai : r/StableDiffusion
New Makima flux lora | image created by WhiteZ | Tensor.Art

flux_makima, woman, collared shirt, white shirt, black necktie, black pants, red hair, single braid, in the office holding a sign with the text: "tensor art", smiling evilly, pixiv
[FLUX LORA INCLUDED] Art Nouveau
New Flux Lora - Flat Colour Anime v3 : r/StableDiffusion
Hi all. I just released a new version of my Flat Color Anime lora updated for Flux Dev.

https://civitai.com/models/180891/flat-color-anime?modelVersionId=724241

This was something I worked on for SDXL a while ago to try and get a clean anime style similar to some of the 1.5 checkpoints I liked. Flux inspired me to have a go at updating it, and I'm loving how the results are coming out, so I thought I'd share. I'll be refining it over the next days/weeks, so any feedback is appreciated. Enjoy!
https://www.reddit.com/r/StableDiffusion/comments/1esrzf1/new_flux_lora_flat_colour_anime_v3/
Flat Color Anime v3.1 - v3.1 Flux Dev | Stable Diffusion LoRA | Civitai
Online FLUX! fal.ai integrates ControlNet, providing online LoRA training
Exciting news for AI-art enthusiasts! "FLUX Online Edition" at fal.ai has introduced a series of powerful new features, including key modules like ControlNet and LoRA. The best part? These features are ready to use straight out of the box, with no complex configuration needed.

Feature Highlights

ControlNet: This new feature makes it effortless to modify character expressions in images and even control objects within the scene.

LoRA Model Online Training: Users can train their own LoRA models online and share them via links, allowing creativity to spread widely.
The robust capabilities of FLUX, combined with third-party-developed ControlNet and LoRA support, make image generation more flexible and personalized. Users have already started creating a variety of styles with these tools, from cyberpunk to traditional Chinese, limited only by imagination.

The ease of use of FLUX Online Edition is one of the key reasons for its popularity. Users do not need to deploy models or set up workflows themselves; they can simply use these advanced image-generation tools through the fal.ai platform.

fal.ai is one of the official online inference platforms partnering with the FLUX development team, Black Forest Labs. Team member Jonathan Fischoff actively shares works created by users on the platform on Twitter, showcasing the diverse applications of FLUX and LoRA models.

Users have reacted enthusiastically to FLUX Online Edition's new features, and look forward to combining ControlNet and LoRA with image-to-image functions to create more diverse visual effects.

Fischoff demonstrated images generated by FLUX using ControlNet contour control, and showed how to quickly generate new images in different styles by modifying the prompt.
The Imp project aims to provide a family of strong multimodal small language models (MSLMs). Our imp-v1-3b is a strong MSLM with only 3B parameters, built upon the small yet powerful SLM Phi-2 (2.7B) and the powerful visual encoder SigLIP (0.4B), and trained on the LLaVA-v1.5 training set.

As shown in the image below, imp-v1-3b significantly outperforms counterparts of similar model size, and even achieves slightly better performance than the strong LLaVA-7B model on various multimodal benchmarks.
https://huggingface.co/MILVLG/imp-v1-3b
alvdansen/flux-koda
#StableDiffusion #Flux #AI #ComfyUI
Model description
Koda captures the nostalgic essence of early 1990s photography, evoking memories of disposable cameras and carefree travels. It specializes in creating images with a distinct vintage quality, characterized by slightly washed-out colors, soft focus, and the occasional light leak or film grain. The model excels at producing slice-of-life scenes that feel spontaneous and candid, as if plucked from a family photo album or a backpacker's travel diary.

Words that can highlight interesting nuances within the model:

kodachrome, blurry, realistic, still life, depth of field, scenery, no humans, monochrome, greyscale, traditional media, horizon, looking at viewer, light particles, shadow
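As a small illustration, those trigger words can be combined with a subject programmatically. The word list below comes from the model description above; the helper itself and its validation logic are my own sketch, not part of the model card:

```python
# Trigger words listed in the Koda model description.
KODA_TRIGGERS = {
    "kodachrome", "blurry", "realistic", "still life", "depth of field",
    "scenery", "no humans", "monochrome", "greyscale", "traditional media",
    "horizon", "looking at viewer", "light particles", "shadow",
}

def koda_prompt(subject, *triggers):
    """Join a subject with Koda trigger words into a comma-separated prompt."""
    for t in triggers:
        if t not in KODA_TRIGGERS:
            raise ValueError(f"not a listed Koda trigger word: {t!r}")
    return ", ".join((subject,) + triggers)

print(koda_prompt("a roadside diner at dusk", "kodachrome", "depth of field"))
```

Validating against the published list is just a convenience so typos fail loudly instead of silently weakening the style.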

ostris/flux-dev-lora-trainer

Train ostris/flux-dev-lora-trainer
Trainings for this model run on Nvidia H100 GPU hardware, which costs $0.001528 per second.
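At that rate, run cost scales linearly with wall-clock time. A quick back-of-the-envelope sketch; the per-second rate is the one quoted above, while the run lengths are hypothetical examples:

```python
H100_USD_PER_SECOND = 0.001528  # Replicate's quoted rate for this trainer

def training_cost_usd(minutes):
    """Estimated cost of a training run, given its length in minutes."""
    return minutes * 60 * H100_USD_PER_SECOND

for minutes in (10, 30, 60):
    print(f"{minutes:3d} min -> ${training_cost_usd(minutes):.2f}")
```

So a typical LoRA training run measured in tens of minutes lands in the low single-digit dollars on this hardware.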

If you haven’t yet trained a model on Replicate, we recommend you read one of the following guides.

Fine-tune an image model
Fine-tune SDXL with your own images
Create training
The easiest way to train ostris/flux-dev-lora-trainer is to use the form below. Upon creation, you will be redirected to the training detail page, where you can monitor your training's progress and eventually download the weights and run the trained model.
https://replicate.com/ostris/flux-dev-lora-trainer/train
The flux-lora-collection is a series of LoRA training checkpoints for the FLUX.1-dev model released by the XLabs AI team. This collection supports the generation of images in various styles and themes, such as anthropomorphism, anime, and Disney styles, offering high customizability and innovation.
https://huggingface.co/XLabs-AI/flux-lora-collection