HF-hub - Share and discover more about AI with social posts from the community.huggingface/OpenAi
Works best when you include 'ps1 game screenshot' in the prompt.

Late-90s/early-2000s PS1/N64 console graphics.

Trained for 5,000 steps.

Trained on 15 PS1/N64 game screenshots, captioned with GPT-4o and manually adjusted, using https://github.com/ostris/ai-toolkit/tree/main

Trigger words
You should use ps1 to trigger the image generation.

Download model
Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

Use it with the 🧨 diffusers library
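The card does not include a snippet, so here is a minimal sketch of loading this LoRA with diffusers. The base model (FLUX.1-dev) and the weights filename `lora.safetensors` are assumptions; check the Files & versions tab for the actual file name.

```python
def load_ps1_lora(repo_id="black-forest-labs/FLUX.1-dev",
                  lora_path="lora.safetensors"):
    """Build a FLUX pipeline with the PS1-style LoRA loaded.

    Imports are done lazily so the function can be defined without
    diffusers installed; actually running it needs a capable GPU.
    """
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
    pipe.load_lora_weights(lora_path)
    pipe.to("cuda")
    return pipe

# Usage (requires a GPU; remember the trigger word "ps1"):
# pipe = load_ps1_lora()
# image = pipe("ps1 game screenshot of a foggy castle courtyard",
#              num_inference_steps=28).images[0]
# image.save("ps1_castle.png")
```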
flux-dreambooth-lora
This is a LoRA derived from black-forest-labs/FLUX.1-dev.

Two subjects were trained in: an AI-generated character named Julia and a real person, River Phoenix.

Empirically, training two subjects simultaneously kept the model from collapsing, though they didn't train evenly: River Phoenix took longer than "Julia", possibly due to the synthetic nature of the Julia data.

The photos of "Julia" came from Flux Pro. The River Phoenix images were pulled from Google Image Search, with a focus on high-resolution, high-quality samples.

No captions were used during training, only the instance prompts julia and river phoenix.

The main validation prompt used during training was:

julie, in photograph style

Validation settings
CFG: 3.0
CFG Rescale: 0.0
Steps: 28
Sampler: None
Seed: 420420420
Resolution: 1024x1024
Note: The validation settings are not necessarily the same as the training settings.

You can find some example images in the following gallery:
FLUX.1-[dev] Panorama LoRA (v2)
A LoRA model to generate panoramas using Flux dev.

Which image ratio to use?
This model has been trained on images with a 2:1 ratio (2048x1024).

So you might get good results if you use these dimensions for width and height.

However panorama viewers are pretty flexible when it comes to resolution, and FLUX.1 seems to generalize well.

For instance the gallery samples have been generated in 1536 × 640 (~21:9), since this is reasonably fast (16 sec on the HF Inference API).

It doesn't work for case X or Y
It usually works fine for "normal" requests, but the model might have trouble creating the spherical distortion if you ask for uncommon content, locations, or angles.

If you give me some examples, maybe I can try to find more data to better cover uncommon panoramas.

Non-commercial use
As the base model is FLUX.1-[dev] and since the data comes from Google Street View, it should be used for non-commercial, personal or demonstration purposes only.

Please use it responsibly, thank you!
Here is my first crack at a realism Flux model, and I guess the first realism model I've ever shared publicly.

All the training data consisted of open-source/open-license photographs. I intend to revisit this model and expand and improve on it. It can benefit from emphasizing the style by adding "vintage" or "faded film" to the prompt.

Big appreciation to Glif for sponsoring the model!
A Flux Dev 1 model that creates images with both photographic and illustrative elements. Pretty cool, right? I've trained this model on a curated collection of images gathered from Pinterest. I am not the creator of the original art used for the fine-tuning, but I want to provide access for creative exploration. You can run the model yourself or train your own on Replicate. Let me know your thoughts and have a wonderful day!

https://replicate.com/lucataco/ai-toolkit/readme
https://replicate.com/lucataco/flux-dev-lora

Trigger words
Use in the style of TOK in your prompt for better style preservation. It is best to place the trigger words first, then describe the illustrative elements of your scene, such as clothing or expressive details.

Download model
Download the *.safetensors LoRA in the Files & versions tab.

Use it with the 🧨 diffusers library
Stable-Diffusion: Optimized for Mobile Deployment
State-of-the-art generative AI model used to generate detailed images conditioned on text descriptions
Generates high resolution images from text prompts using a latent diffusion model. This model uses CLIP ViT-L/14 as text encoder, U-Net based latent denoising, and VAE based decoder to generate the final image.

This model is an implementation of Stable-Diffusion found here. This repository provides scripts to run Stable-Diffusion on Qualcomm® devices. More details on model performance across various devices can be found at https://huggingface.co/qualcomm/Stable-Diffusion
Score-Based Generative Modeling through Stochastic Differential Equations (SDE)
Paper: Score-Based Generative Modeling through Stochastic Differential Equations

Authors: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole

Abstract:

Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. the score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE but additionally enables exact likelihood computation and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
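As a reading aid (an addition to this post, not part of the paper's abstract), the two SDEs the abstract refers to can be written in the paper's notation as:

```latex
% Forward SDE: slowly injects noise, data distribution -> prior
\mathrm{d}\mathbf{x} = \mathbf{f}(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}

% Reverse-time SDE: removes noise, driven by the score of the perturbed data
\mathrm{d}\mathbf{x} = \left[\mathbf{f}(\mathbf{x}, t) - g(t)^2\,\nabla_{\mathbf{x}} \log p_t(\mathbf{x})\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}
```

Here \(\nabla_{\mathbf{x}} \log p_t(\mathbf{x})\) is the time-dependent score that a neural network is trained to estimate, and \(\bar{\mathbf{w}}\) is a reverse-time Wiener process.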
keras-io/denoising-diffusion-implicit-models
This model was created for the Keras code example on denoising diffusion implicit models (DDIM).

Model description
The model uses a U-Net with identical input and output dimensions. It progressively downsamples and upsamples its input image, adding skip connections between layers having the same resolution. The architecture is a simplified version of the architecture of DDPM. It consists of convolutional residual blocks and lacks attention layers. The network takes two inputs, the noisy images and the variances of their noise components, which it encodes using sinusoidal embeddings.
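As a rough illustration of the sinusoidal embedding mentioned above (a sketch of the idea, not the exact Keras code: dimension count and frequency range here are arbitrary choices), a scalar such as a noise variance can be encoded as sines and cosines at log-spaced frequencies:

```python
import math

def sinusoidal_embedding(x, embedding_dims=32, min_freq=1.0, max_freq=1000.0):
    """Encode a scalar (e.g. a noise variance) as a vector of sines and
    cosines at log-spaced frequencies; half sines, half cosines."""
    n = embedding_dims // 2
    # n frequencies spaced evenly in log-space between min_freq and max_freq
    freqs = [
        math.exp(math.log(min_freq) +
                 i * (math.log(max_freq) - math.log(min_freq)) / (n - 1))
        for i in range(n)
    ]
    angles = [2.0 * math.pi * f * x for f in freqs]
    return [math.sin(a) for a in angles] + [math.cos(a) for a in angles]

emb = sinusoidal_embedding(0.5)
print(len(emb))  # 32
```

This gives the network a multi-scale view of the noise level, so one U-Net can be conditioned on the full range of variances.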

Intended uses & limitations
The model is intended for educational purposes, as a simple example of denoising diffusion generative models. It has modest compute requirements with reasonable natural image generation performance.

Training and evaluation data
The model is trained on the Oxford Flowers 102 dataset, a diverse natural-image dataset containing around 8,000 images of flowers. Since the official splits are imbalanced (most of the images are contained in the test split), new random splits were created (80% train, 20% validation) for training the model. Center crops were used for preprocessing.

Training procedure
The model is trained to denoise noisy images, and can generate images by iteratively denoising pure Gaussian noise.
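The iterative-denoising idea can be sketched with a deterministic DDIM-style update on a scalar "image". This is a toy (the schedule is a simple linear ramp, the "network" is an oracle for a point-mass data distribution at 0, and none of this is the Keras example's actual code), but the update rule is the DDIM one: predict the noise, estimate the clean signal, then re-noise to the next (lower) noise level.

```python
import math
import random

def ddim_sample(eps_model, steps=50):
    """Deterministic DDIM-style sampling loop on a scalar 'image'.

    eps_model(x, alpha_bar) should predict the noise component of x.
    Uses a simple linear alpha_bar schedule (toy choice, not the
    cosine-style schedules real implementations use).
    """
    x = random.gauss(0.0, 1.0)  # start from pure Gaussian noise
    alphas = [i / steps for i in range(1, steps + 1)]  # noisy -> clean
    for a_cur, a_next in zip(alphas, alphas[1:]):
        eps = eps_model(x, a_cur)
        # estimate the clean signal, then re-noise to the next level
        x0 = (x - math.sqrt(1 - a_cur) * eps) / math.sqrt(a_cur)
        x = math.sqrt(a_next) * x0 + math.sqrt(1 - a_next) * eps
    return x

# Oracle "network": the data distribution is a point mass at 0,
# so the noise component of x is all of x.
oracle = lambda x, a: x / math.sqrt(1 - a)

x_final = ddim_sample(oracle)
print(abs(x_final))  # very close to 0: the sampler recovers the data
```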

For more details, check out the Keras code example or the companion code repository, which has additional features.
Simple DCGAN implementation in TensorFlow to generate CryptoPunks.

Generated samples
Project repository: CryptoGANs.

Usage
You can play with the HuggingFace space demo.

Or try it yourself: https://huggingface.co/huggan/crypto-gan
Training details
The XLabs AI team is happy to publish fine-tuning scripts for Flux, including:

LoRA 🔥
ControlNet 🔥
See our GitHub for the training script and training configs.

Training Dataset
The dataset should have the following format for the training process:

├── images/
│ ├── 1.png
│ ├── 1.json
│ ├── 2.png
│ ├── 2.json
│ ├── ...

Each .json file contains a "caption" field with the text prompt for the matching image.
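A minimal sketch of producing that layout with the standard library (the directory name and caption are made-up examples; the .png files themselves are assumed to exist already):

```python
import json
from pathlib import Path

def make_caption_files(root, captions):
    """Write the images/ layout shown above: N.json next to N.png,
    where each JSON file holds a "caption" field with the prompt.

    `captions` maps an image stem like "1" to its text prompt.
    """
    images = Path(root) / "images"
    images.mkdir(parents=True, exist_ok=True)
    for stem, caption in captions.items():
        (images / f"{stem}.json").write_text(
            json.dumps({"caption": caption}, indent=2))

make_caption_files("dataset", {"1": "a red vintage car on a coastal road"})
print(json.loads(Path("dataset/images/1.json").read_text())["caption"])
```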

Inference
python3 demo_lora_inference.py \
--checkpoint lora.safetensors \
--prompt " handsome girl in a suit covered with bold tatt
Introducing HelpingAI2-9B, an emotionally intelligent LLM.
Model Link :
OEvortex/HelpingAI2-9B

Demo Link:
Abhaykoul/HelpingAI2


This model is part of the innovative HelpingAI series, and it stands out for its ability to engage users with emotional understanding.

Key Features:
-----------------

* It scores 95.89 on EQ-Bench, higher than all top-notch LLMs, reflecting advanced emotional recognition.
* It responds in an empathetic and supportive manner.

Must try our demo:
Abhaykoul/HelpingAI2 https://huggingface.co/spaces/Abhaykoul/HelpingAI2
Welcome to the Coolify Self-Host Installation Guide!

I'm excited to help you learn how to install Coolify self-host on your server. This guide is designed for high school students, so don't worry if you're new to server management or coding.

What is Coolify? Coolify is an open-source platform that lets you self-host your applications, databases, and services on your own server. It's like a Heroku or Netlify alternative, but self-hosted on your own hardware.

System Requirements Before we begin, make sure you have the following:

A server with SSH access (e.g., VPS, Raspberry Pi, or any other server you have SSH access to)
A Debian-based Linux distribution (e.g., Debian, Ubuntu) or a Redhat-based Linux distribution (e.g., CentOS, Fedora)
At least 2 CPUs, 2 GB of memory, and 30+ GB of storage
Step 1: Choose Your Server Resources When choosing your server resources, consider the following:

If you plan to run a lot of applications, you may need more resources (e.g., more CPUs, memory, and storage)
If you're hosting a static site, you may need fewer resources
If you're hosting a database or a service like WordPress, you may need more resources
Step 2: Install Coolify There are two ways to install Coolify:

Automated Installation (Recommended)
This method uses a script to install Coolify on your server.

Open a terminal on your server and run the following command:
curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
This script will install the required dependencies, configure logging, and create a directory structure for Coolify.

Once the script finishes, you'll see a message indicating that Coolify has been installed successfully.
Manual Installation (For Advanced Users)
This method requires you to install Docker Desktop on your Windows machine and then configure Coolify manually.

Install Docker Desktop on your Windows machine.
Create a directory to hold your Coolify-related data (e.g., C:\Users\yourusername\coolify).
Copy the docker-compose.windows.yml and .env.windows-docker-desktop.example files to the directory you created.
Rename the files to docker-compose.yml and .env.
Create a Coolify network with the command docker network create coolify.
Start Coolify with the command docker compose up.
What's Next? Once you've installed Coolify, you can access it at localhost:8000 on your machine. You'll see a simple and easy-to-use UI to manage your servers and applications.

Quiz Time! Before we move on, let's make sure you understand the basics of Coolify self-host installation.

What is the recommended method for installing Coolify?

Automated Installation
Manual Installation
Both methods are equally recommended
Please respond with the number of your chosen answer. https://coolify.io/
What is AI code?

AI code refers to the programming languages, algorithms, and techniques used to create artificial intelligence (AI) systems. AI code can be written in various programming languages, such as Python, Java, C++, and R.

Types of AI code:

Machine Learning (ML) code: ML is a subset of AI that involves training algorithms on data to make predictions or decisions. ML code is used to develop models that can learn from data, such as neural networks, decision trees, and clustering algorithms.
Deep Learning (DL) code: DL is a type of ML that uses neural networks with multiple layers to analyze data. DL code is used to develop models that can recognize patterns in images, speech, and text.
Natural Language Processing (NLP) code: NLP is a subset of AI that deals with the interaction between computers and human language. NLP code is used to develop models that can understand, generate, and process human language.
Computer Vision code: Computer vision is a subset of AI that deals with the interpretation and understanding of visual data from images and videos. Computer vision code is used to develop models that can recognize objects, detect faces, and track movements.
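To make the machine-learning bullet concrete, here is a toy sketch of "training an algorithm on data to make predictions": a nearest-centroid classifier in plain Python (an illustration only; real AI code would use a library like scikit-learn or PyTorch):

```python
def nearest_centroid_fit(points, labels):
    """'Train' by averaging each class's 2D points into a centroid."""
    sums, counts = {}, {}
    for (x, y), label in zip(points, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def nearest_centroid_predict(centroids, point):
    """Predict by picking the class whose centroid is closest."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2 +
                               (centroids[lab][1] - py) ** 2)

centroids = nearest_centroid_fit(
    [(1, 1), (2, 1), (8, 9), (9, 8)], ["cat", "cat", "dog", "dog"])
print(nearest_centroid_predict(centroids, (7, 8)))  # dog
```

The "learning" step is just summarizing the data; prediction generalizes to unseen points, which is the essence the ML bullet describes.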
https://hf.co/chat/assistant/66bf4bae8e085ef84feb900b
AI Devin Dev - Your Programming Assistant
Enter your programming questions and get expert answers
Image Gen +
Generate Images in HD, BULK and With Simple Prompts for FREE.
Created by KingNish

https://huggingface.co/chat/assistant/6612cb237c1e770b75c5ebad
HuggingAssist
HuggingAssist is a LLM-powered assistant specialized in the HuggingFace ecosystem, offering guidance on libraries like Transformers, Datasets ...
Created by Ali-C137

https://huggingface.co/chat/assistant/65bd0adc08560e58be454d86
Professor GPT
This is an AI who acts as a college professor and personal tutor.
Created by DavidMcKay

https://huggingface.co/chat/assistant/65bfef86731d14eb43fb66d9
Stable Diffusion Image Prompt Generator
An expert in crafting intricate prompts for the generative AI 'Stable Diffusion', ensuring top-tier image generation.
Created by tintwotin
https://huggingface.co/chat/assistant/65d32610a28805eeae6824c7
Clone of Hugging Face CTO
Trying to scale my productivity by cloning myself. Please talk with me!
Created by julien-c
https://huggingface.co/chat/assistant/65b26737e9ccc6d0853dc16f
Talk to Marcus Aurelius
He might have lived long ago, but he can still give good advice.
Created by merve

https://huggingface.co/chat/assistant/65bfed22022ba290531112f8