Want to Learn How to Write Image Prompts for Midjourney AI?
I've authored an e-book called "The Art of Midjourney AI: A Guide to Creating Images from Text".

📖 Read the e-book
Download ChatGPT Desktop App: macOS / Windows / Linux

ℹ️ NOTE: Sometimes a prompt may not work as you expected, or may be rejected by the AI. Please try again, start a new thread, or log out and log back in. If none of these work, try rewriting the prompt in your own words while keeping the instructions the same.

Want to Write Effective Prompts?
I've authored a free e-book called "The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts".

📖 Read the free e-book

Want to Learn How to Make Money using ChatGPT Prompts?
I've authored an e-book called "How to Make Money with ChatGPT: Strategies, Tips, and Tactics".

📖 Buy the e-book
NEW: Awesome ChatGPT Store: A Hub for Custom GPTs
Now you can access Awesome ChatGPT Store, a dynamic new addition to the ChatGPT ecosystem! With the introduction of customizable GPT models, our store provides a curated collection of specialized ChatGPT GPTs, each tailored for unique applications and use cases.

Explore a wide range of GPTs, from those optimized for specific programming languages, to models fine-tuned for creative writing, technical analysis, and more. This repository is not just a store; it's a community-driven platform where developers and enthusiasts can share, discover, and leverage the full potential of ChatGPT's versatility.

Dive into the world of customized conversational AI models and enrich your projects with cutting-edge technology. Visit the Awesome ChatGPT Store now and start exploring the possibilities!
Awesome ChatGPT Prompts [CSV dataset]
This is a Dataset Repository of Awesome ChatGPT Prompts

View All Prompts on GitHub
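
If you'd rather work with the prompts programmatically, here is a minimal sketch using the 🤗 datasets library; the repo id below is an assumption based on the well-known Hub mirror of this dataset, so check the dataset page for the exact id:

```python
from datasets import load_dataset

# Load the prompts CSV as a dataset; the repo id is an assumption.
prompts = load_dataset("fka/awesome-chatgpt-prompts", split="train")

# Each row has an "act" (persona) column and a "prompt" column.
print(prompts[0]["act"], "->", prompts[0]["prompt"][:80])
```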
How to install mistral-inference?
It is recommended to use mistralai/Mistral-Nemo-Instruct-2407 with mistral-inference. For HF transformers code snippets, please keep scrolling.

pip install mistral_inference

Download
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Instruct-2407",
                  allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"],
                  local_dir=mistral_models_path)
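
Once the weights are downloaded, the mistral_inference package ships a mistral-chat CLI for a quick smoke test. An invocation along these lines should work, though the exact flags may differ across mistral-inference versions:

```
mistral-chat $HOME/mistral_models/Nemo-Instruct --instruct --max_tokens 256
```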
Mistral Nemo is a transformer model, with the following architecture choices:

Layers: 40
Dim: 5,120
Head dim: 128
Hidden dim: 14,336
Activation Function: SwiGLU
Number of heads: 32
Number of kv-heads: 8 (GQA)
Vocabulary size: 2**17 ~= 128k
Rotary embeddings (theta = 1M)
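
For reference, here are the same hyperparameters collected into a plain Python dict; the key names are illustrative and not the ones used in the official params.json:

```python
# Mistral Nemo architecture hyperparameters from the list above.
arch = {
    "n_layers": 40,
    "dim": 5120,
    "head_dim": 128,
    "n_heads": 32,
    "n_kv_heads": 8,         # grouped-query attention (GQA)
    "hidden_dim": 14336,     # SwiGLU feed-forward width
    "vocab_size": 2**17,     # 131,072 (~128k)
    "rope_theta": 1_000_000, # rotary embedding base
}

# With GQA, every 32 / 8 = 4 query heads share one key/value head,
# shrinking the KV cache 4x relative to full multi-head attention.
assert arch["n_heads"] % arch["n_kv_heads"] == 0
```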
Model Card for Mistral-Nemo-Instruct-2407
Key features
Released under the Apache 2 License
Pre-trained and instructed versions
Trained with a 128k context window
Trained on a large proportion of multilingual and code data
Drop-in replacement of Mistral 7B
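
For the HF transformers route mentioned above, a minimal sketch using the chat-style text-generation pipeline (this assumes a recent transformers release with Mistral-Nemo support and enough GPU memory for the ~12B weights):

```python
import torch
from transformers import pipeline

# bf16 plus device_map="auto" to keep memory use manageable.
chatbot = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a one-line summary of GQA."}]
out = chatbot(messages, max_new_tokens=64)

# generated_text holds the chat history; the last entry is the reply.
print(out[0]["generated_text"][-1])
```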
# Cos Stable Diffusion XL 1.0 and Cos Stable Diffusion XL 1.0 Edit

Cos Stable Diffusion XL 1.0 Base is tuned to use a Cosine-Continuous EDM VPred schedule. The most notable feature of this schedule change is its capacity to produce the full color range from pitch black to pure white, alongside more subtle improvements to the model's rate-of-change to images across each step.

Edit Stable Diffusion XL 1.0 Base is tuned to use a Cosine-Continuous EDM VPred schedule, and then upgraded to perform instructed image editing. This model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image.

## Usage
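
A minimal sketch of loading the base model in diffusers with an EDM v-prediction schedule; the checkpoint filename and scheduler parameters below are assumptions, so check the model card for the exact values:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EDMEulerScheduler

# Load the single-file checkpoint; the filename is an assumption.
pipe = StableDiffusionXLPipeline.from_single_file(
    "cosxl.safetensors", torch_dtype=torch.float16
).to("cuda")

# Swap in a continuous EDM schedule with v-prediction; the sigma
# values here are assumed defaults, not confirmed by the card.
pipe.scheduler = EDMEulerScheduler(
    sigma_min=0.002, sigma_max=120.0, sigma_data=1.0,
    prediction_type="v_prediction",
)

# The full-range contrast is easiest to see with a dark prompt.
image = pipe("a pitch-black room with a single candle", guidance_scale=7.0).images[0]
image.save("cosxl_test.png")
```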
These were all made using the SwarmUI LoRA extract tool.

## 4thTail-v045-SDXL.safetensors
LoRA of 4th_tail_v0.4.5 extracted from Stable Diffusion XL 1.0 Base at rank 32.
note: broken

[4thTail-v045-SDXL.safetensors](https://huggingface.co/AshtakaOOf/lora-extract/resolve/main/sdxl/4thTail-v045-SDXL.safetensors?download=true)

## aidxl052-extract.safetensors
LoRA of animeIllustDiffusion_v052 extracted from Stable Diffusion XL 1.0 Base at rank 24.
note: Doesn't work
Better Alignment with Instruction Back-and-Forth Translation

Abstract
We propose a new method, instruction back-and-forth translation, to construct high-quality synthetic data grounded in world knowledge for aligning large language models (LLMs).

https://arxiv.org/abs/2408.04614 #LLM
Radxa Launches New Single-Board Computers Featuring Rockchip RK3588S2 and RK3582 Chips, Starting at $30
Radxa has announced the launch of its latest single-board computers (SBCs), the Radxa ROCK 5C and the Radxa ROCK 5C Lite. These credit-card-sized devices are designed to cater to various computing needs, with prices starting at just $30 for the Lite version and $50 for the standard ROCK 5C. Both models are currently available for pre-order and are set to begin shipping on April 10, 2024. #RK3588S2 #Radxa
Characteristics and capabilities of Figure 02:
Hardware aspect:
The appearance adopts an exoskeleton structure, integrating the power supply and computing wiring inside the body, improving reliability and packaging compactness.
It is driven by a motor, with a height of 5 feet 6 inches and a weight of 70 kilograms.
It is equipped with a fourth-generation hand, with 16 degrees of freedom and strength comparable to a human's, capable of carrying up to 25 kilograms and flexibly performing various human-like tasks.
It has six RGB cameras (on the head, chest, and back) and "superhuman" vision.
The internal battery pack capacity has increased to 2.25 kWh. Its founder hopes it can achieve more than 20 hours of effective working time per day (the official website currently lists only 5 hours of battery life; the 20 hours is presumably an inferred limit combining charging and working).
Software and intelligence aspect:
It is equipped with an on-board vision-language model (VLM), enabling it to perform rapid common-sense visual reasoning.
Compared to the previous generation, on-board computing and AI reasoning capability have tripled, allowing many real-world AI tasks to be executed fully autonomously.
It is equipped with a speech-to-speech reasoning model custom-built by the company's investor OpenAI. The default UI is speech, and it communicates with humans through the on-board microphone and speaker. #AI
I've built a space for creating prompts for FLUX

gokaygokay/FLUX-Prompt-Generator


You can create long prompts from images or simple words, and enhance your short prompts with the prompt enhancer. You can configure various settings such as artform, photo type, character details, scene details, style, and artist to create tailored prompts.

And you can combine all of them with custom prompts using LLMs (Mixtral, Mistral, Llama 3, and Mistral-Nemo).

The UI is a bit complex, but it includes almost everything you need. Choosing the random option is the most fun!
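
If you want to drive the Space from code rather than the UI, here is a sketch with gradio_client; the api_name and argument list are assumptions, so check the Space's "Use via API" page for the real endpoint:

```python
from gradio_client import Client

# Connect to the public Space; the endpoint name is an assumption.
client = Client("gokaygokay/FLUX-Prompt-Generator")
result = client.predict("a lighthouse at dusk", api_name="/generate_prompt")
print(result)
```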

And I've created some other Spaces for using FLUX models with captioners and enhancers.

- gokaygokay/FLUX.1-dev-with-Captioner
- gokaygokay/FLUX.1-Schnell-with-Captioner
Results on TextVQA, DocVQA, OCRBench, OpenCompass MultiModal Avg, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, and Object HalBench.
💫 Easy Usage. MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) llama.cpp and ollama support for efficient CPU inference on local devices, (2) GGUF format quantized models in 16 sizes, (3) efficient LoRA fine-tuning with only 2 V100 GPUs, (4) streaming output, (5) quick local WebUI demo setup with Gradio and Streamlit, and (6) interactive demos on HuggingFace Spaces.
🚀 Efficient Deployment. MiniCPM-Llama3-V 2.5 systematically employs model quantization, CPU optimizations, NPU optimizations, and compilation optimizations, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 achieves a 150-fold speedup in on-device image encoding for multimodal large models and a 3-fold increase in language decoding speed.
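
For local testing in plain transformers (route 6 above aside), here is a sketch based on the model card's chat interface; trust_remote_code pulls in the repo's custom model.chat method, and the exact arguments are assumptions to verify against the card:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# The custom model class is loaded via trust_remote_code.
model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-Llama3-V-2_5",
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to("cuda").eval()
tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True
)

image = Image.open("receipt.jpg").convert("RGB")
msgs = [{"role": "user", "content": "What is the total on this receipt?"}]

# model.chat is part of the repo's custom code, not the standard HF API.
answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer)
print(answer)
```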