HF-hub - Share and discover more about AI with social posts from the community.
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks ,

โœ’๏ธInkubaLM has been trained from scratch using 1.9 billion tokens of data for five African languages, along with English and French data, totaling 2.4 billion tokens of data. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French.

Model:
lelapa/InkubaLM-0.4B

Demo:
Tonic/Inkuba-0.4B
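
A minimal sketch of trying the model with Transformers (standard API; the Swahili prompt and generation settings are illustrative, and the repo may require trust_remote_code - check the model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lelapa/InkubaLM-0.4B"  # model ID from the post
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Habari ya leo:"  # Swahili prompt (illustrative)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```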
Spent a few minutes building an alternative to Character AI on top of Llama 3.1 405B through SambaNova's super-fast inference API.

Space:
kz919/Persona-AI

API referral link: https://sambanova.ai/fast-api?api_ref=907266
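
For reference, a minimal sketch of a persona-style call through an OpenAI-compatible client (the base URL and model ID below are assumptions; verify them against SambaNova's documentation):

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint and model ID; check SambaNova's docs
client = OpenAI(base_url="https://api.sambanova.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="Meta-Llama-3.1-405B-Instruct",  # assumed model ID
    messages=[
        {"role": "system", "content": "You are Sherlock Holmes. Stay in character."},
        {"role": "user", "content": "What do you deduce about me?"},
    ],
)
print(response.choices[0].message.content)
```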
I started training a public LoRA style model (2 separate trainings, each on 4x A6000).

I am experimenting with captions vs. no captions, so we will see which yields the best results for style training on FLUX.

I generated captions with the multi-GPU batch JoyCaption app.

I am showing 5 examples of what JoyCaption generates on FLUX dev. The left images are the original style images from the dataset.

I used my multi-GPU JoyCaption app (8x A6000 for ultra-fast captioning): https://www.patreon.com/posts/110613301

I used my Gradio batch caption editor to edit some words and add the activation token "ohwx 3d render": https://www.patreon.com/posts/108992085

The no-caption dataset uses only "ohwx 3d render" as the caption.

I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4x A6000 GPUs and training 500 epochs on 114 images: https://www.patreon.com/posts/110879657

Total step count is 500 * 114 / 4 (4x GPU, batch size 1) = 14,250.

Training is currently on track to take 37 hours if I don't terminate early.

It will save a checkpoint once every 25 epochs.

Full Windows Kohya LoRA training tutorial: https://youtu.be/nySGu12Y05k

I am still editing the full cloud tutorial.

Hopefully I will share the trained LoRA on Hugging Face and CivitAI, along with the full dataset including captions.

I got permission to share the dataset, but it can't be used commercially.

Also, I will hopefully share the full workflow on the CivitAI and Hugging Face LoRA pages.
# Excited to Share: New LLM Tokenization Tool - Convert Text to Tokens and Vice Versa! 🚀

I've just developed a powerful tool for anyone working with Large Language Models (LLMs) or diving into Natural Language Processing (NLP).

๐Ÿ” Introducing the LLM Tokenization - Convert Text to tokens and vice versa!!

Key Features:
- Convert text to tokens and token IDs
- Reverse engineer: convert token IDs back to text
- Support for popular models: Llama 3 (will add more models iteratively)
- User-friendly Gradio interface for easy interaction

Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you!
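
As a minimal sketch of what the tool does under the hood (standard Hugging Face Transformers tokenizer API; the model ID is an illustrative choice and may require access approval):

```python
from transformers import AutoTokenizer

# Any Hugging Face model ID with a tokenizer works here (illustrative choice)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "Tokenization converts text to token IDs and back."

tokens = tokenizer.tokenize(text)                             # text -> tokens
token_ids = tokenizer.encode(text, add_special_tokens=False)  # text -> token IDs
decoded = tokenizer.decode(token_ids)                         # token IDs -> text

print(tokens, token_ids, decoded, sep="\n")
```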

๐Ÿ‘ฉโ€๐Ÿ’ป Tech Stack:
- Python
- Gradio for the web interface
- Hugging Face Transformers for tokenization

The application is deployed on Hugging Face Spaces as a Gradio application.

🔗 Try it out: https://lnkd.in/g6R5z9k2

#NLP #MachineLearning #AI #PythonDevelopment #OpenSourceAI
๐๐ž๐ฐ ๐‘๐ž๐ฅ๐ž๐š๐ฌ๐ž: ๐Œ๐š๐ฃ๐จ๐ซ ๐“๐Ž๐Œ ๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ ๐„๐ฅ๐ž๐ฏ๐š๐ญ๐ข๐จ๐ง ๐Œ๐จ๐๐ž๐ฅ ๐„๐ฑ๐ฉ๐š๐ง๐ฌ๐ข๐จ๐ง ๐Ÿ—บ

Dataset:
Major-TOM/Core-DEM


Today, together with the European Space Agency (ESA) and Adobe Research, we release a global expansion of Major TOM with GLO-30 DEM data.

You can now instantly access nearly 2M Major TOM samples with elevation data to build your next AI model for EO. 🌍

๐Ÿ” Browse the data in our usual viewer app:
Major-TOM/MajorTOM-Core-Viewer
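
If you prefer code to the viewer, here is a minimal sketch of streaming samples with the datasets library (the split name and columns are assumptions; check the dataset card):

```python
from datasets import load_dataset

# Stream so you don't have to download the full global dataset up front
ds = load_dataset("Major-TOM/Core-DEM", split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # inspect available columns (elevation raster, metadata, ...)
```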


Fantastic work championed by Paul Borne-Pons @NewtNewt 🚀
๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ

Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning.
📔 👣 https://huggingface.co/blog/anakin87/spectrum

---

Looking to fine-tune Language Models efficiently and save on computational resources?

One popular method is QLoRA, which quantizes the original model and trains low-rank adapters on top.
It's quite effective and uses less GPU memory than full fine-tuning.

However, QLoRA applies Low-Rank Adaptation uniformly across the entire model.

What if we could identify the most informative layers and only fine-tune those? 🤔

This is exactly what Spectrum does! 👇

🔬 Spectrum analyzes the weight matrices of all layers in a Language Model and calculates a Signal-to-Noise Ratio (SNR) for each one.
(It uses Random Matrix Theory and the Marchenko-Pastur distribution to distinguish signal from noise.)
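
As a rough illustration of the idea (not Spectrum's exact implementation; the noise-scale estimate below is a simplification I am assuming for the sketch):

```python
import numpy as np

def layer_snr(W: np.ndarray) -> float:
    """Crude signal-to-noise ratio of a weight matrix via Marchenko-Pastur."""
    n, m = W.shape
    s = np.linalg.svd(W, compute_uv=False)       # singular values
    sigma = np.median(s) / np.sqrt(max(n, m))    # rough noise-scale estimate (assumption)
    mp_edge = sigma * (np.sqrt(n) + np.sqrt(m))  # MP upper edge for a pure-noise matrix
    signal = s[s > mp_edge].sum()                # energy above the noise edge
    noise = s[s <= mp_edge].sum()                # energy at or below it
    return signal / (noise + 1e-12)
```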

🎯 Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).

You can then ❄️ freeze the rest of the model and focus your 🏋️‍♂️ training on the chosen layers.
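
A minimal PyTorch sketch of the freezing step (the model ID and layer names are illustrative; Spectrum itself outputs the list of layers to keep trainable):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # illustrative

# Layers Spectrum flagged as most informative (names are hypothetical)
trainable_layers = [
    "model.layers.0.mlp.down_proj",
    "model.layers.3.self_attn.o_proj",
]

for name, param in model.named_parameters():
    # Freeze everything except parameters belonging to the selected layers
    param.requires_grad = any(name.startswith(t) for t in trainable_layers)
```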


๐Ÿ† Results/Evaluation
- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.
- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.
- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...

---

For a practical guide, check out the article above.
The Forward-Forward Algorithm 🤖

FFA replaces the forward and backward passes of backpropagation with two forward passes: one with positive (real) data and another with negative data. Each layer has its own objective function: to increase or decrease a "goodness" metric. The positive pass uses real data and adjusts weights to increase "goodness" in every hidden layer; the negative pass does the opposite.
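
A minimal sketch of one layer's objective in this scheme (PyTorch; the squared-activation goodness and softplus loss form follow the paper, while the threshold value and layer shape are illustrative):

```python
import torch
import torch.nn.functional as F

def goodness(h: torch.Tensor) -> torch.Tensor:
    # "Goodness" of a layer: sum of squared activations, per sample
    return h.pow(2).sum(dim=1)

def ff_layer_loss(layer: torch.nn.Linear, x_pos, x_neg, theta: float = 2.0):
    g_pos = goodness(torch.relu(layer(x_pos)))  # positive pass: push goodness above theta
    g_neg = goodness(torch.relu(layer(x_neg)))  # negative pass: push goodness below theta
    return F.softplus(torch.cat([theta - g_pos, g_neg - theta])).mean()
```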

I must say, reading and implementing a godfather paper feels quite fulfilling :)
Thank you, Prof. Geoffrey Hinton.

Code: https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algor
Is AI's impact on elections being overblown? Three researchers think so in this opinion piece published in MIT Technology Review.

Highlights:

• "AI is being used to try to influence electoral processes, but these efforts have not been fruitful."
• "Why were these initial speculations about AI-enabled electoral interference so off (…)? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology."
• "Yet we should remember that there's a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed."

👉 Read more here: https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/
๐Ÿšจ ๐—›๐˜‚๐—บ๐—ฎ๐—ป ๐—™๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—œ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด: ๐—ก๐—ผ๐˜ ๐˜๐—ต๐—ฒ ๐—ด๐—ผ๐—น๐—ฑ๐—ฒ๐—ป ๐—ด๐—ผ๐—ผ๐˜€๐—ฒ ๐˜„๐—ฒ ๐˜๐—ต๐—ผ๐˜‚๐—ด๐—ต๐˜?

I've just read a great paper where Cohere researchers raise significant questions about using human feedback to evaluate AI language models.

Human feedback is often regarded as the gold standard for judging AI performance, but it turns out it might be more like fool's gold: the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance.

๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:
๐Ÿง  Test several models: Llama-2, Falcon-40B, Cohere Command 6 and 52B ๐Ÿ™…โ€โ™‚๏ธ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer!

💪 Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction with systems like RLHF.

🎭 The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent.

And a consequence of the above:
๐Ÿ”„ ๐—ฅ๐—Ÿ๐—›๐—™ ๐—บ๐—ถ๐—ด๐—ต๐˜ ๐—ฏ๐—ฎ๐—ฐ๐—ธ๐—ณ๐—ถ๐—ฟ๐—ฒ: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate.

This paper means we need to think carefully about how we evaluate and train AI systems, to ensure we're rewarding correctness over the appearance of it, like confident talk.

โ›”๏ธ Chatbot Arenaโ€™s ELO leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models will become smarter and smarter.

Read the paper 👉
Human Feedback is not Gold Standard (2309.16349): https://huggingface.co/papers/2309.16349
Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation: https://mltblog.com/4dNPSnB

New additions to this groundbreaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions.

I also added several illustrations, featuring xLLM in action with a full session and sample commands to fine-tune in real time. All the code, input sources (an anonymized corporate corpus from a Fortune 100 company), and contextual backend tables including embeddings are on GitHub. My system has no weights, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full-text entities from the corpus with no latency (including URLs, categories, titles, email addresses, and so on) thanks to a well-designed architecture.

Read more, get the code, paper and everything for free, at https://mltblog.com/4dNPSnB
…
Zero-shot VQA evaluation of Docmatix using an LLM - do we need to fine-tune?
While developing Docmatix, we found that fine-tuning Florence-2 performed well on the DocVQA task, but the model still scored low on the benchmark. To improve the benchmark score, we had to further fine-tune the model on the DocVQA dataset to learn the grammatical style of the benchmark.

Interestingly, the human evaluators felt that the additionally fine-tuned model performed worse than the one fine-tuned on Docmatix alone, so we decided to use the additionally fine-tuned model only for ablation experiments and to publicly release the model fine-tuned on Docmatix alone.

Although the answers generated by the model are semantically consistent with the reference answers (as shown in Figure 1), the benchmark scores are low. This raises the question: should we fine-tune models to improve performance on existing metrics, or should we develop new metrics that are more consistent with human perception?
📅 AI Event Scheduler - Streamline event creation with this AI Chrome extension, saving time and reducing manual errors.
📚 Cokeep - Transform bookmarks into collaborative spaces with AI organization, summarization, and team sharing capabilities.
🎨 Crayon AI - Unleash creativity with an all-in-one AI image toolbox, with generation, editing, and optimization for all skill levels.
🖥 Tailwind Genie - Generate responsive UI designs with AI, streamlining web development using Tailwind CSS.
🤗 Video AI Hug - Transform static photos into personalized hugging videos, bringing cherished moments to life.
📝 Postin - Supercharge your LinkedIn presence with AI-crafted posts, smart management, and engagement-boosting strategies.
📊 Metastory AI v2.2 - Enhance project management with this update, which adds Jira integration, project publishing, and an improved editor for streamlined collaboration.