Share and discover more about AI with social posts from the community.
For all the Muslims out there who are interested in Quran and its tafsir (explanations). This humble dataset consists of 84 different books of tafsir for nearly all the ayat in the Quran:
MohamedRashad/Quran-Tafseer


I hope it helps someone to build something nice and useful with it ^_^
Anybody ever play Final Fantasy: Crystal Chronicles?
Like, *really* play it?

Mag Mell has been in my head recently. What a place that was.

Those cocoons looked like I could lay down inside of one, and it would be the most powerful sleep of a lifetime, with dreams that would last one thousand years, and I'd wake up with the wisdom of generations.

...Hey, anybody like text adventures?
Last Week in Medical AI: Top Research Papers/Models
🏅(September 7 - September 14, 2024)

🏅 Medical AI Paper of the week
Chai-1: Foundation model for molecular structure prediction

Medical LLMs & Benchmarks
- BrainWave: A Brain Signal Foundation Model
- DS-ViT: Vision Transformer for Alzheimer’s Diagnosis
- EyeCLIP: Visual–language model for ophthalmic images
- Segment Anything Model for Tumor Segmentation
- MEDIC: Evaluating LLMs in Clinical Applications

Medical LLM Applications
- KARGEN: Radiology Report Generation LLMs
- DrugAgent: Explainable Drug Repurposing Agents
- Improving RAG in Medicine with Follow-up Questions

Frameworks and Methodologies
- Infrastructure for Automatic Cell Segmentation
- Data Alignment for Dermatology AI
- Diagnostic Reasoning in Natural Language
- Two-Stage Instruction Fine-tuning Approach for Med

AI in Healthcare Ethics
- Concerns and Choices of Using LLMs for Healthcare
- Understanding Fairness in Recommender Systems
- Towards Fairer Health Recommendations

Check the full thread: https://x.com/OpenlifesciAI/status/1832476252260712788

Thank you for your continued support and love for this series! Stay up-to-date with weekly updates on Medical LLMs, datasets, and top research papers by following @aaditya 🤗
Trained Myself With 256 Images on FLUX — Results Mind Blowing

Detailed Full Workflow

Medium article : https://medium.com/@furkangozukara/ultimate-flux-lora-training-tutorial-windows-and-cloud-deployment-abb72f21cbf8

Windows main tutorial : https://youtu.be/nySGu12Y05k

Cloud tutorial for GPU poor or scaling : https://youtu.be/-uhL2nW7Ddw

Full detailed results and conclusions : https://www.patreon.com/posts/111891669

Full config files and details to train : https://www.patreon.com/posts/110879657

SUPIR Upscaling (default settings are now perfect) : https://youtu.be/OYxVEvDf284

I used my Poco X6 camera phone and images I took myself

My dataset is far from ready, so I used many repeated and nearly identical images, but this was rather experimental

Hopefully I will continue taking more shots, improve the dataset, and reduce its size in the future

I trained the CLIP-L and T5-XXL text encoders as well

Since there was a lot of pushback from the community claiming my workflow wouldn't work with expressions, I took a break from research and used whatever I had

I used my own researched workflow for training with Kohya GUI, plus my self-developed SUPIR batch upscaling app with face upscaling and automatic LLaVA caption improvement

Download the images to see them at full size; the last grid provided is downscaled 50%

Workflow

Gather a dataset that has the expressions and perspectives you want after training; this is crucial, since whatever you include, the model can generate well

Follow one of the LoRA training tutorials / guides

After training your LoRA, use your favorite UI to generate images

I prefer SwarmUI; here are the prompts I used (you can add specific expressions to the prompts), including face inpainting:

https://gist.github.com/FurkanGozukara/ce72861e52806c5ea4e8b9c7f4409672

After generating images, use SUPIR to upscale 2x with maximum resemblance

Short Conclusions

Using 256 images certainly caused more overfitting than necessary
Researchers from Tencent have developed DepthCrafter, a novel method for generating temporally consistent long depth sequences for open-world videos using video diffusion models.

It leverages a pre-trained image-to-video diffusion model (SVD) as the foundation and uses a 3-stage training strategy on paired video-depth datasets:
1. Train on a large realistic dataset (1-25 frames)
2. Fine-tune temporal layers on realistic data (1-110 frames)
3. Fine-tune spatial layers on synthetic data (45 frames)

It adapts SVD's conditioning mechanism for frame-by-frame video input and employs latent diffusion in VAE space for efficiency.
Sprinkle in an intelligent inference strategy for extremely long videos:
- Segment-wise processing (up to 110 frames)
- Noise initialization to anchor depth distributions
- Latent interpolation for seamless stitching
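
To make the segment-wise idea concrete, here is a toy sketch of overlap-and-blend stitching. It is a rough stand-in only: DepthCrafter actually works in latent space with noise anchoring, and `estimate_depth`, the segment length, and the overlap below are placeholders.

```python
# Toy sketch of segment-wise depth estimation with overlap blending; not
# DepthCrafter's actual code. `estimate_depth` is a placeholder for any
# per-segment depth model returning an array of shape (T, H, W).
import numpy as np

def depth_for_long_video(frames, estimate_depth, seg_len=110, overlap=25):
    depths = [None] * len(frames)
    start = 0
    while start < len(frames):
        end = min(start + seg_len, len(frames))
        seg = estimate_depth(frames[start:end])
        for i, d in enumerate(seg):
            idx = start + i
            if depths[idx] is None:
                depths[idx] = d
            else:
                # linear blend over the overlap for a seamless transition
                w = (i + 1) / (overlap + 1)
                depths[idx] = (1 - w) * depths[idx] + w * d
        if end == len(frames):
            break
        start = end - overlap
    return np.stack(depths)
```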

It outperforms SOTA methods on multiple datasets (Sintel, ScanNet, KITTI, Bonn).

Read here: https://depthcrafter.github.io
nanoGPT with Sigmoid Self-Attention
I couldn’t resist, I had to give it a try :)

Some observations on M2:
SSA was ~5-10% faster in training with similar final loss values, slightly less coherent text generation, marginally higher perplexity, and lower memory usage compared to softmax.
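
For reference, a minimal sketch of what the swap looks like (my own simplification, not the notebook's exact code): the softmax over attention scores is replaced by an elementwise sigmoid with a -log(n) bias, keeping the usual causal mask.

```python
# Minimal sketch of sigmoid self-attention: sigmoid(scores - log(n)) instead of
# softmax(scores), with the standard causal mask zeroing out future positions.
import math
import torch

def sigmoid_attention(q, k, v):
    # q, k, v: (batch, n_heads, seq_len, head_dim)
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    scores = (q @ k.transpose(-2, -1)) / math.sqrt(head_dim)
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool, device=q.device))
    # the -log(n) bias keeps the total attention mass roughly comparable to softmax
    weights = torch.sigmoid(scores - math.log(seq_len)).masked_fill(~causal, 0.0)
    return weights @ v
```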

Code: https://github.com/Jaykef/ai-algorithms/blob/main/sigmoid_attn.ipynb
How much VRAM will you need for training your AI model? 💾🧠
Check out this app where you convert:
PyTorch/TensorFlow summary -> required VRAM
or
Parameter count -> required VRAM

Use it in: http://howmuchvram.com
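
The underlying arithmetic is roughly this back-of-the-envelope estimate (my own simplification, not necessarily the app's exact formula):

```python
# Rough VRAM estimate: weights + gradients + Adam states for training,
# weights only for inference; activations are ignored here.
def estimate_vram_gb(num_params, bytes_per_param=4, training=True):
    weights = num_params * bytes_per_param
    if not training:
        return weights / 1024**3
    grads = num_params * bytes_per_param
    adam_states = 2 * num_params * bytes_per_param  # two moments per parameter
    return (weights + grads + adam_states) / 1024**3

# e.g. a 7B-parameter model in fp16, inference only: ~13 GB
print(f"{estimate_vram_gb(7e9, bytes_per_param=2, training=False):.1f} GB")
```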

And everything is open source! Ask for new features or contribute at:
https://github.com/AlexBodner/How_Much_VRAM
If it's useful to you, leave a star 🌟 and share it with someone who will find it useful!
More discussion in: https://x.com/AlexBodner_/status/1832054850294812679
inflatebot/MN-12B-Mag-Mell-R1
https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1
MN-12B-Mag-Mell is a multi-stage merge, inspired by hypermerges like Tiefighter and Umbral Mind, intended for use as a general-purpose "Best of Nemo" model for co-writing, roleplay, and text adventures.

Consistently, Mag Mell produced prose that shocked testers, with a minimum of "slop". It also exhibited a unique sense of humor, and a propensity for inserting bespoke details into adventuring scenarios.
Have you tried the new SQL Console yet?

Would love to hear any queries you've tried or general feedback! If you haven't, go try it out and let us know 🤗

If you have some interesting queries feel free to share the URLs as well!
𝗘𝘅𝘁𝗿𝗮𝗰𝘁𝗶𝗻𝗴 𝘆𝗼𝘂𝗿 𝗛𝗧𝗠𝗟 𝘄𝗲𝗯𝗽𝗮𝗴𝗲𝘀 𝘁𝗼 𝗺𝗮𝗿𝗸𝗱𝗼𝘄𝗻 𝗶𝘀 𝗻𝗼𝘄 𝗽𝗼𝘀𝘀𝗶𝗯𝗹𝗲 𝗲𝗻𝗱-𝘁𝗼-𝗲𝗻𝗱 𝘄𝗶𝘁𝗵 𝗮 𝘀𝗶𝗺𝗽𝗹𝗲 𝗟𝗟𝗠! 👏

Jina just released Reader-LM, which handles the whole pipeline of extracting markdown from HTML webpages.

A while ago, Jina had released a completely code-based deterministic program to do this extraction, based on heuristics: e.g., “if the text is in a <p> tag, keep it, but if it’s hidden behind another, remove it”.

🤔 But they received complaints from readers: some found it too detailed, others not detailed enough, depending on the page.

➡️ So they decided, 𝗺𝗮𝘆𝗯𝗲 𝗵𝗲𝘂𝗿𝗶𝘀𝘁𝗶𝗰𝘀 𝘄𝗲𝗿𝗲 𝗻𝗼𝘁 𝗲𝗻𝗼𝘂𝗴𝗵: 𝗶𝗻𝘀𝘁𝗲𝗮𝗱, 𝘁𝗵𝗲𝘆 𝘁𝗿𝗶𝗲𝗱 𝘁𝗼 𝘁𝗿𝗮𝗶𝗻 𝗮 𝗟𝗟𝗠 𝘁𝗼 𝗱𝗼 𝘁𝗵𝗲 𝗰𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗲𝘅𝘁𝗿𝗮𝗰𝘁𝗶𝗼𝗻. This LLM does not need to be very strong, but it should handle a very long context: it’s a challenging, “shallow-but-wide” architecture.

𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
2️⃣ models: Reader-LM-0.5B and 1.5B
⚙️ Two stages of training: first, short and simple HTML to get the basics, then ramp up to longer and harder HTML up to 128k tokens
🔎 Use contrastive search for decoding: this empirically reduces “repeating output” issues
➡️ Their models beat much larger models at HTML extraction 🔥
🤗 Weights available on HF (sadly cc-by-nc license):
jinaai/reader-lm-1.5b
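
If you want to try it, a hedged sketch with transformers might look like this. Feeding the raw HTML as the prompt is an assumption (check the model card), but the decoding does use contrastive search via penalty_alpha/top_k, as mentioned above.

```python
# Hedged sketch: feed raw HTML to Reader-LM and decode with contrastive search
# (penalty_alpha + top_k); the plain-HTML prompt format is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jinaai/reader-lm-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

html = "<html><body><h1>Hello</h1><p>This becomes <b>markdown</b>.</p></body></html>"
inputs = tokenizer(html, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=512, penalty_alpha=0.6, top_k=4)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```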
Hugging Face presents FineVideo 🎥! Unlocking the next generation of Video understanding 🚀

🤯 3400 hours of annotated Creative Commons videos with rich character descriptions, scene splits, mood, and content descriptions per scene, as well as QA pairs.
🔥
@mfarre processed over 2M YouTube-CC videos to make this incredibly powerful selection.

Very psyched to fine-tune idefics on this dataset. ⚡️
Explore the videos:
HuggingFaceFV/FineVideo-Explorer
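
A quick way to peek at the data with 🤗 datasets; a hedged sketch, since the dataset id and streaming access are assumptions based on the announcement, and the dataset may be gated.

```python
# Hedged sketch: stream one sample and inspect its fields.
from datasets import load_dataset

ds = load_dataset("HuggingFaceFV/finevideo", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())
```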
𝐎𝐩𝐞𝐧𝐀𝐈 𝐟𝐢𝐧𝐚𝐥𝐥𝐲 𝐫𝐞𝐯𝐞𝐚𝐥𝐬 “🍓”: 𝐜𝐫𝐚𝐳𝐲 𝐜𝐡𝐚𝐢𝐧-𝐨𝐟-𝐭𝐡𝐨𝐮𝐠𝐡𝐭-𝐭𝐮𝐧𝐞𝐝 𝐦𝐨𝐝𝐞𝐥 >> 𝐆𝐏𝐓-𝟒𝐨 💥

OpenAI had hinted at a mysterious “project strawberry” for a long time: 𝘁𝗵𝗲𝘆 𝗽𝘂𝗯𝗹𝗶𝘀𝗵𝗲𝗱 𝘁𝗵𝗶𝘀 𝗻𝗲𝘄 𝗺𝗼𝗱𝗲𝗹 𝗰𝗮𝗹𝗹𝗲𝗱 “𝗼𝟭” 𝟭𝗵𝗼𝘂𝗿 𝗮𝗴𝗼, 𝗮𝗻𝗱 𝘁𝗵𝗲 𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗶𝘀 𝗷𝘂𝘀𝘁 𝗺𝗶𝗻𝗱-𝗯𝗹𝗼𝘄𝗶𝗻𝗴.

🤯 Ranks among the top 500 students in the US in a qualifier for the USA Math Olympiad
🤯 Beats human-PhD-level accuracy by 8% on GPQA, a benchmark of hard science problems where the previous best was Claude 3.5 Sonnet at 59.4.
🤯 Scores 78.2% on vision benchmark MMMU, making it the first model competitive w/ human experts
🤯 GPT-4o on MATH scored 60% ⇒ o1 scores 95%

How did they pull this off? Sadly, OpenAI keeps getting better at “making cryptic AF reports that don’t reveal any real info”, so here are some excerpts:

💬 “𝗼𝟭 𝘂𝘀𝗲𝘀 𝗮 𝗰𝗵𝗮𝗶𝗻 𝗼𝗳 𝘁𝗵𝗼𝘂𝗴𝗵𝘁 𝘄𝗵𝗲𝗻 𝗮𝘁𝘁𝗲𝗺𝗽𝘁𝗶𝗻𝗴 𝘁𝗼 𝘀𝗼𝗹𝘃𝗲 𝗮 𝗽𝗿𝗼𝗯𝗹𝗲𝗺. 𝗧𝗵𝗿𝗼𝘂𝗴𝗵 𝗿𝗲𝗶𝗻𝗳𝗼𝗿𝗰𝗲𝗺𝗲𝗻𝘁 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴, 𝗼𝟭 𝗹𝗲𝗮𝗿𝗻𝘀 𝘁𝗼 𝗵𝗼𝗻𝗲 𝗶𝘁𝘀 𝗰𝗵𝗮𝗶𝗻 𝗼𝗳 𝘁𝗵𝗼𝘂𝗴𝗵𝘁 𝗮𝗻𝗱 𝗿𝗲𝗳𝗶𝗻𝗲 𝘁𝗵𝗲 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀 𝗶𝘁 𝘂𝘀𝗲𝘀. It learns to recognize and correct its mistakes.”

And of course, they decide to hide the content of this precious Chain-of-Thought. Would it be for maximum profit? Of course not, you awful capitalist, it’s to protect users:

💬 “We also do not want to make an unaligned chain of thought directly visible to users.”

They’re right, it would certainly have hurt my feelings to see the internals of this model tearing apart math problems.

🤔 I suspect it could be not only CoT, but also some agentic behaviour where the model can just call a code executor. The kind of score improvements they show certainly look like the ones you see with agents.

This model will be immediately released for ChatGPT and some “trusted API users”.

Let’s start cooking to release the same thing in 6 months! 🚀
I believe Hugging Face should have something similar to Hacktoberfest. I miss the days when there were events like this every 3 months for audio, deep reinforcement learning, Gradio themes, but everything has slowed down. There are no more Hugging Face events.
📢 The Three-hop (💡aspect + 🤔opinion + 🧠reason) Chain-of-Thought concept + an LLM make a decent approach for reasoning about the emotions of participants in textual dialogues.
Delighted to share a tutorial video that covers:
- The proper application of LLMs to implicit IR
- Ways to align different information types (causes and states) within the same LLM
- Launching an LLM in Google Colab that can extract characters' emotions from dialogues 🧪
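
As a rough illustration of the three-hop idea, here is a small sketch: the prompt wording is mine, not the exact THOR-ECAC templates, and the model id comes from the card linked below.

```python
# Illustrative three-hop (aspect -> opinion -> reason) prompting with a flan-t5
# checkpoint; prompts are hypothetical, not the project's actual templates.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "nicolay-r/flan-t5-emotion-cause-thor-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def ask(prompt):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    return tokenizer.decode(model.generate(ids, max_new_tokens=64)[0], skip_special_tokens=True)

dialogue = 'A: "You forgot my birthday again." B: "I am so sorry, work has been crazy."'
aspect  = ask(f"{dialogue} Which aspect of the situation is speaker A reacting to?")          # hop 1
opinion = ask(f"{dialogue} Given the aspect '{aspect}', how does A feel about it?")           # hop 2
reason  = ask(f"{dialogue} Given the feeling '{opinion}', what is the cause of A's emotion?") # hop 3
print(aspect, opinion, reason, sep="\n")
```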

🎥: https://www.youtube.com/watch?v=vRVDQa7vfkU

Project: https://github.com/nicolay-r/THOR-ECAC
Paper: https://aclanthology.org/2024.semeval-1.4/
Model card:
nicolay-r/flan-t5-emotion-cause-thor-base
The Romulus model series has been released on Hugging Face, continually pre-trained on 34,864,949 tokens of French laws and intended to serve as a foundation for fine-tuning on labeled data 🤗

The training code, dataset, and model weights are open and freely available on HF, and training was done on an H100 provided by Microsoft for Startups, using Unsloth AI by @danielhanchen and @shimmyshimmer 🦥

Link to the base model:
louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1


Link to the instruct model:
louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct


Link to the dataset:
louisbrulenaudet/Romulus-cpt-fr


Please note that these models have not been aligned for the production of usable texts as they stand, and will certainly need to be refined for the desired tasks in order to produce satisfactory results.
> 𝗪𝗮𝗻𝘁 𝘁𝗼 𝗸𝗻𝗼𝘄 𝗵𝗼𝘄 𝗺𝘂𝗰𝗵 𝗮𝗻 𝗔𝗣𝗜 𝗟𝗟𝗠 𝗰𝗮𝗹𝗹 𝗰𝗼𝘀𝘁𝘀 𝘆𝗼𝘂?

I've just made this Space that gets you the API price for any LLM call, for nearly all inference providers out there!

This is based on a comment by @victor under my HF Post a few months back, and leverages BerriAI's data for LLM prices.

Check it out here 👉
m-ric/text_to_dollars
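
Under the hood the idea is simple; here is a hedged sketch against BerriAI's public price table (the JSON URL and field names are assumptions, check the litellm repo if they have moved).

```python
# Hedged sketch: look up per-token prices in BerriAI's price table and compute
# the cost of a single call. URL and field names are assumed, verify upstream.
import requests

PRICES_URL = ("https://raw.githubusercontent.com/BerriAI/litellm/main/"
              "model_prices_and_context_window.json")

def call_cost_usd(model, prompt_tokens, completion_tokens):
    entry = requests.get(PRICES_URL, timeout=10).json()[model]
    return (prompt_tokens * entry["input_cost_per_token"]
            + completion_tokens * entry["output_cost_per_token"])

print(f"${call_cost_usd('gpt-4o', 1_000, 500):.4f}")
```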
Auto-regressive LMs have ruled, but encoder-based architectures like GLiNER are proving to be just as powerful for information extraction while offering better efficiency and interpretability. 🔍

Past encoder backbones were limited by small pre-training datasets and old techniques, but with innovations like LLM2Vec, we've transformed decoders into high-performing encoders! 🔄💡

What’s New?
🔹Converted Llama & Qwen decoders to advanced encoders
🔹Improved the GLiNER architecture to work with rotary positional encoding
🔹New GLiNER (zero-shot NER) & GLiClass (zero-shot classification) models

🔥 Check it out:

New models:
knowledgator/llm2encoder-66d1c76e3c8270397efc5b5e


GLiNER package: https://github.com/urchade/GLiNER

GLiClass package: https://github.com/Knowledgator/GLiClass

💻 Read our blog for more insights, and stay tuned for what’s next!
https://medium.com/@knowledgrator/llm2encoders-e7d90b9f5966
Free research tip:
Get used to writing the first draft of your paper in Markdown using VS Code's Jupyter notebook extension - it lets you do quick sanity checks with code and maths - an absolute AAA experience :)
made an image similarity demo to test out the
mistral-community/pixtral-12b-240910
model.

If anyone knows how to generate captions with it, please do let me know x 🚀

here's the demo :
Tonic/Pixtral


hope you like it 🤗