Share and discover more about AI with social posts from the community.
How to Use SwarmUI & Stable Diffusion 3 on Cloud Services: Kaggle (free), Massed Compute & RunPod
In this video, I demonstrate how to install and use #SwarmUI on cloud services. If you lack a powerful GPU or wish to harness more GPU power, this video is essential. You’ll learn how to install and utilize SwarmUI, one of the most powerful Generative AI interfaces, on Massed Compute, RunPod, and Kaggle (which offers free dual T4 GPU access for 30 hours weekly). This tutorial will enable you to use SwarmUI on cloud GPU providers as easily and efficiently as on your local PC. Moreover, I show how to use Stable Diffusion 3 (#SD3) in the cloud. SwarmUI uses the #ComfyUI backend.
🔗 The Public Post (no login or account required) Shown In The Video With The Links ➡️ https://www.patreon.com/posts/stableswarmui-3-106135985
🔗 Windows Tutorial to Learn How to Use SwarmUI ➡️ https://youtu.be/HKX8_F1Er_w
🔗 How to download models very fast to Massed Compute, RunPod and Kaggle and how to upload models or files to Hugging Face very fast tutorial ➡️ https://youtu.be/X5WVZ0NMaTg
🔗 SECourses Discord ➡️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388
🔗 Stable Diffusion GitHub Repo (Please Star, Fork and Watch) ➡️ https://github.com/FurkanGozukara/Stable-Diffusion
Basic Usage of StableSwarmUI
So you want to know how to get started with StableSwarmUI, huh? It's easy!

For the most part, just download the installer and follow the instructions on screen. Everything explains itself; even the settings and parameters have clickable "?" icons that explain what they do!

Nonetheless, here's a step-by-step you can follow:

Installing
Step one: Install StableSwarmUI.

Once you've run the basic program installation, if all went well, it will open a web interface to select basic install settings.

- Agree to the SD license
- Pick a theme (I think the default is best, but you've got options)
- Pick who the UI is for (usually just Yourself, or Yourself on LAN)
- Pick what backend(s) to install. If you already have ComfyUI or another backend you can skip this; if not, pick one. I recommend ComfyUI for local usage.
- Pick any model(s) you want to download. If you already have some you can skip this; if not, I recommend SDXL 1.0.
- Confirm you want the settings you selected, and install.
Once this is done, it should automatically redirect you to the main interface.

(You can close the server at any time by just closing that console window it pulls up, and you can start it again via the desktop icon, or the launch script in the folder).
https://github.com/Stability-AI/StableSwarmUI/blob/master/docs/Basic%20Usage.md
More on NVIDIA NIM at SIGGRAPH

At SIGGRAPH, NVIDIA also introduced generative AI models and NIM microservices for the OpenUSD framework to accelerate developers’ abilities to build highly accurate virtual worlds for the next evolution of AI.

To experience more than 100 NVIDIA NIM microservices with applications across industries, visit ai.nvidia.com.

Near-Instant Access to DGX Cloud Provides Accessible AI Acceleration

The NVIDIA DGX Cloud platform is purpose-built for generative AI, offering developers easy access to reliable accelerated computing infrastructure that can help them bring production-ready applications to market faster.

The platform provides scalable GPU resources that support every step of AI development, from prototype to production, without requiring developers to make long-term AI infrastructure commitments.

Hugging Face inference-as-a-service on NVIDIA DGX Cloud powered by NIM microservices offers easy access to compute resources that are optimized for AI deployment, enabling users to experiment with the latest AI models in an enterprise-grade environment.
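
As a rough illustration, here is what calling a hosted model through the Hugging Face client can look like (a minimal sketch; the model id is a placeholder, and whether requests are served by NIM on DGX Cloud depends on the integration enabled for your account):

```python
from huggingface_hub import InferenceClient

# Placeholder model id; whether the request is served by NIM on DGX Cloud
# depends on the integration enabled for your account.
client = InferenceClient("meta-llama/Meta-Llama-3.1-8B-Instruct")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Summarize what NVIDIA NIM is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```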
What’s New in July 2024
We added 20+ models to the Hugging Face collection in the Azure AI model catalog in July. These included multilingual models (with a focus on Chinese, Dutch, Arabic, and South-East Asian languages), embedding models, text generation models (SLMs and LLMs), and models with a domain-specific focus (e.g., biomedical). The table below summarizes additions by task and notable features. Click a model name to view its model card on Azure AI for more details. In the next section, we’ll put the spotlight on a couple of models or model families that may be of particular interest to developers exploring SLMs or multilingual applications.
https://techcommunity.microsoft.com/t5/ai-ai-platform-blog/new-hugging-face-models-on-azure-ai-multilingual-slm-and-biomed/ba-p/4211881?wt.mc_id=twitter_4211881_organicsocial_reactor
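
For developers curious what consuming one of these catalog models looks like, here is a minimal sketch of calling a serverless deployment over REST; the endpoint URL, key, header convention, and payload shape are placeholders to verify against the model card:

```python
import requests

# Placeholder endpoint and key for a serverless deployment created from
# the Azure AI model catalog; the real values come from your deployment.
ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com/chat/completions"
API_KEY = "<your-api-key>"

payload = {
    "messages": [{"role": "user", "content": "Translate 'hello' to Dutch."}],
    "max_tokens": 64,
}
resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```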
AI unicorn Hugging Face acquires XetHub to manage huge AI models, aiming to host hundreds of millions of models. Meta's Llama 3.1 has 405B parameters, driving the need for more scalable solutions. XetHub's tools for efficient data management will integrate into Hugging Face's platform. #AI
Brief backstory: Before diving into AI, I spent over a decade working in ecological fields such as the conservation corps, biodynamic farming, and natural habitat restoration. This background instilled in me a deep concern about the environmental impact of scaling AI without sustainable practices.

Driven by this concern, I've spent months planning and experimenting to make my AI work more eco-friendly. I'm thrilled to announce that I've successfully transitioned my entire operation to run on 100% sustainable solar power!
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck & share your best score in the comments!
🚀 We’re excited to launch Ghost 8B Beta (1608), a top-performing language model with unmatched multilingual support and cost efficiency.

Key Highlights:
- Superior Performance: Outperforms Llama 3.1 8B Instruct, GPT-3.5 Turbo, Claude 3 Opus, GPT-4, and more in winrate scores.
- Expanded Language Support: Now supports 16 languages, including English, Vietnamese, Spanish, Chinese, and more.
- Enhanced Capabilities: Improved math, reasoning, and instruction-following for better task handling.

With two context options (8k and 128k), Ghost 8B Beta is perfect for complex, multilingual applications, balancing power and cost-effectiveness.

🔗 Learn More: https://ghost-x.org/docs/models/ghost-8b-beta
🔗 Collection: ghost-x/ghost-8b-beta-668ead6179f93be717db4542 (Ghost 8B Beta)
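
A quick way to try the model locally is through the transformers chat pipeline. A minimal sketch, assuming the repo id is ghost-x/ghost-8b-beta (inferred from the collection slug above; verify the exact model id on Hugging Face first):

```python
from transformers import pipeline

# "ghost-x/ghost-8b-beta" is an assumed repo id inferred from the collection
# slug above; verify the exact model id on Hugging Face before running.
chat = pipeline("text-generation", model="ghost-x/ghost-8b-beta", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in English and in Vietnamese."}]
out = chat(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```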
Put together a small repo showing how to go from making your own fine-tuning dataset w/ services like Groq & Together to publishing that model on ollama.

In my case I fine-tuned SmolLM-360M to be a better assistant for my Pi-Card (previous post) project.

Check it out!
https://github.com/nkasmanoff/ft-flow GitHub - nkasmanoff/ft-flow: Synthetic data to inference for LLM finetuning
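
The core flow (generate synthetic instruction/response pairs with a hosted LLM, then save them for fine-tuning a small model) can be sketched with an OpenAI-compatible client. The Groq base URL and model name below are assumptions, and the repo's actual scripts may differ:

```python
import json
from openai import OpenAI

# Groq exposes an OpenAI-compatible API; the base URL and model name are
# assumptions here -- check Groq's docs and nkasmanoff/ft-flow for specifics.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="<your-groq-api-key>",
)

prompts = ["What is the weather like on Mars?", "Set a timer for ten minutes."]
dataset = []
for prompt in prompts:
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[
            {"role": "system", "content": "You are a concise voice assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    dataset.append({"prompt": prompt, "response": resp.choices[0].message.content})

# Save as JSONL for fine-tuning a small model such as SmolLM-360M.
with open("synthetic_data.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
```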
ResShift 1-Click Windows, RunPod, Massed Compute, Kaggle Installers with Amazing Gradio APP and Batch Image Processing. ResShift is an efficient diffusion model for image super-resolution by residual shifting (NeurIPS 2023, Spotlight).


Official Repo : https://github.com/zsyOAOA/ResShift

I have developed a very advanced Gradio APP.

Developed APP Scripts and Installers : https://www.patreon.com/posts/110331752

Features

It supports the following tasks:

- Real-world image super-resolution

- Bicubic (resize by Matlab) image super-resolution

- Blind face restoration

It also includes these features:

- Automatically saves all generated images with the same name, adding numbering when necessary

- Randomize-seed option for each generation

- Batch image processing: provide input and output folder paths, and it processes and saves all images in batch (see the sketch after this list)

- 1-click install on Windows, RunPod, Massed Compute and Kaggle (free account)
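
For a sense of what the batch mode does, here is a hypothetical sketch (not the actual Patreon scripts) of the same-name-plus-numbering save logic:

```python
from pathlib import Path
from PIL import Image

def unique_path(out_dir: Path, name: str) -> Path:
    """Return <name>.png, or <name>_0001.png etc. if it already exists."""
    candidate = out_dir / f"{name}.png"
    counter = 1
    while candidate.exists():
        candidate = out_dir / f"{name}_{counter:04d}.png"
        counter += 1
    return candidate

def batch_process(in_dir: str, out_dir: str, upscale) -> None:
    # `upscale` stands in for the ResShift inference call (hypothetical).
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(in_dir).glob("*")):
        if img_path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        result: Image.Image = upscale(Image.open(img_path).convert("RGB"))
        result.save(unique_path(out, img_path.stem))
```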

Windows Requirements

Python 3.10, FFmpeg, CUDA 11.8, C++ tools and Git

If it doesn't work, follow the tutorial below and install everything exactly as shown:

https://youtu.be/-NjNy7afOQ0

How to Install on Windows

Make sure that you have the above requirements

Extract files into a folder like c:/reshift_v1

Double click Windows_Install.bat and it will automatically install everything for you with an isolated virtual environment folder (VENV)

After that, double click Windows_Start_app.bat to start the app

The first time you use a task, it will download the necessary models (all under 500 MB) into the appropriate folders

If a download fails, the file gets corrupted; sadly the app doesn't verify this, so delete the files inside the weights folder and restart

How to Install on RunPod, Massed Compute, Kaggle

Follow the Massed_Compute_Instructions_READ.txt and Runpod_Instructions_READ.txt

For Kaggle, follow the steps written in the notebook

An example video of how to use my RunPod and Massed Compute scripts and the Kaggle notebook can be seen here:

https://youtu.be/wG7oPp01COg

https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/K7p-mZHsz0BrVH0_DyfDa.png
Multi-image results on Mantis Eval, BLINK Val, Mathverse mv, Sciverse mv, and MIRB; video results on Video-MME and Video-ChatGPT; and few-shot results on TextVQA, VizWiz, VQAv2, and OK-VQA are available in the original post's collapsible tables.
Examples
https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-bike.png
https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/multi_img-code.png
https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/minicpmv2_6/ICL-Mem.png
We deploy MiniCPM-V 2.6 on end devices. The demo video is a raw screen recording on an iPad Pro, without any editing.
https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ai.gif
https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/gif_cases/ticket.gif
https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/mXAEFQFqNd4nnvPk7r5eX.mp4
Demo
Click here to try the Demo of MiniCPM-V 2.6.
https://cdn-uploads.huggingface.co/production/uploads/64abc4aa6cadc7aca585dddf/QVl0iPtT5aUhlvViyEpgs.png

Single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, and Object HalBench:

* We evaluate this benchmark using chain-of-thought prompting.

+ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.

Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
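
As a quick sanity check with the numbers quoted in this post (a 1.8M pixel image encoded into 640 visual tokens), the token density works out to roughly 2,800 pixels per visual token:

```python
# Token density = pixels at maximum resolution / number of visual tokens.
pixels = 1_800_000    # ~1.8M pixel image, as quoted below
visual_tokens = 640   # tokens MiniCPM-V 2.6 produces for that image
print(pixels / visual_tokens)  # ~2812 pixels per visual token
```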

💫 Easy Usage. MiniCPM-V 2.6 can be easily used in various ways: (1) llama.cpp and ollama support for efficient CPU inference on local devices, (2) int4 and GGUF format quantized models in 16 sizes, (3) vLLM support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with Gradio and (6) online web demo.

https://github.com/OpenBMB/MiniCPM-V/raw/main/assets/radar_final.png
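
Beyond those options, plain transformers inference also works. A minimal sketch following the pattern on the Hugging Face model card (trust_remote_code APIs can change between releases, so verify the exact calls against the card):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Pattern from the MiniCPM-V 2.6 model card; trust_remote_code APIs can
# change between releases, so verify against the card before running.
model_id = "openbmb/MiniCPM-V-2_6"
model = AutoModel.from_pretrained(
    model_id, trust_remote_code=True,
    attn_implementation="sdpa", torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": [image, "What is in this image?"]}]
print(model.chat(image=None, msgs=msgs, tokenizer=tokenizer))
```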
🚀 Superior Efficiency. In addition to its friendly size, MiniCPM-V 2.6 also shows state-of-the-art token density (i.e., number of pixels encoded into each visual token). It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support real-time video understanding on end-side devices such as iPad.