HF-hub - Share and discover more about AI with social posts from the community.
0904-NVIDIA Launches NIM Microservices for Generative AI in Japan, Taiwan

Nations around the world are pursuing sovereign AI to produce artificial intelligence using their own computing infrastructure, data, workforce and business networks to ensure AI systems align with local values, laws and interests.

In support of these efforts, NVIDIA today announced the availability of four new NVIDIA NIM microservices that enable developers to more easily build and deploy high-performing generative AI applications.

The microservices support popular community models tailored to meet regional needs. They enhance user interactions through accurate understanding and improved responses based on local languages and cultural heritage.

In the Asia-Pacific region alone, generative AI software revenue is expected to reach $48 billion by 2030 — up from $5 billion this year, according to ABI Research.

Llama-3-Swallow-70B, trained on Japanese data, and Llama-3-Taiwan-70B, trained on Mandarin data, are regional language models that provide a deeper understanding of local laws, regulations and other customs.

The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets and is available as two different NIM microservices, Chat and Instruct. Rakuten’s foundation and instruct models have achieved leading scores among open Japanese large language models, landing the top average score in the LM Evaluation Harness benchmark carried out from January to March 2024.

Training a large language model (LLM) on regional languages enhances the effectiveness of its outputs by ensuring more accurate and nuanced communication, as it better understands and reflects cultural and linguistic subtleties.

The models offer leading performance for Japanese and Mandarin language understanding, regional legal tasks, question-answering, and language translation and summarization compared with base LLMs like Llama 3.

Nations worldwide — from Singapore, the United Arab Emirates, South Korea and Sweden to France, Italy and India — are investing in sovereign AI infrastructure.

The new NIM microservices allow businesses, government agencies and universities to host native LLMs in their own environments, enabling developers to build advanced copilots, chatbots and AI assistants.
https://blogs.nvidia.com/blog/nim-microservices-generative-ai/
Goldman, Nomura tap Meta Llama AI models

In the 18 months since launch, the mostly free, open-source Llama models have seen nearly 350 million downloads and have been taken up by several major firms, including in financial services.

In a progress report, Meta says that Goldman Sachs' GS AI Platform allows the bank's engineers to use Llama models for various use cases, including information extraction from documents.

Meanwhile, Nomura uses Llama on AWS to achieve faster innovation, transparency, bias guardrails, and performance across text summarisation, code generation, log analysis, and document processing.

Meta has ploughed billions of dollars into AI but is taking a different approach to rivals such as OpenAI with its open source model.

In a July letter, Mark Zuckerberg argued that open source AI is good for Meta because it prevents the firm getting locked into a competitor's closed ecosystem.

In addition, he wrote: "The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone."
https://www.finextra.com/newsarticle/44650/goldman-nomura-tap-meta-llama-ai-models
AI ‘tiger’ MiniMax launches text-to-video-generating model to rival OpenAI’s Sora
Xinmei Shen
Published: 7:00pm, 2 Sep 2024
Chinese artificial intelligence (AI) start-up MiniMax has launched video-01, its new text-to-video-generating model, heating up competition with other mainland tech firms that look to catch up with the advances made by OpenAI’s Sora.
MiniMax – known as one of China’s AI “tigers” along with Zhipu AI, Baichuan and Moonshot AI – made video-01 available to the public via its website after unveiling the new tool at the company’s first developer conference in Shanghai on Saturday.
Video-01 enables a user to input a text description to create a video that is up to six seconds in length. The process from the text prompt to generating a video takes about two minutes.

MiniMax founder and chief executive Yan Junjie said at the event that video-01 is the first iteration of the firm’s video-generating tool. He pointed out that future updates will enable users to generate videos from images and to edit these videos, according to local media reports.
Qwen2-VL-7B-Instruct
Introduction
We're excited to unveil Qwen2-VL, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.

What’s New in Qwen2-VL?
Key Enhancements:
SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.

Understanding videos of 20min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.

Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.

Multilingual Support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

Model Architecture Updates:
Naive Dynamic Resolution: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.

Multimodal Rotary Position Embedding (M-ROPE): Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.

We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our Blog and GitHub: https://github.com/QwenLM/Qwen2-VL
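As a rough illustration of the M-ROPE idea described above (not Qwen2-VL's actual implementation), each visual token can be addressed by a (temporal, height, width) position triple rather than a single 1D index. The function name and grid sizes below are made up for the sketch:

```python
import numpy as np

def mrope_position_ids(n_frames, h_tokens, w_tokens):
    """Illustrative M-ROPE index decomposition: every visual token gets a
    (temporal, height, width) position triple instead of a single 1D index."""
    t, h, w = np.meshgrid(
        np.arange(n_frames), np.arange(h_tokens), np.arange(w_tokens),
        indexing="ij",
    )
    # shape: (3, n_frames * h_tokens * w_tokens)
    return np.stack([t.ravel(), h.ravel(), w.ravel()])

# A 2-frame clip tiled into a 2x3 grid of visual tokens -> 12 tokens,
# each addressed by three rotary position components.
pos = mrope_position_ids(2, 2, 3)
```

Each of the three rows would then drive its own rotary embedding, which is how the model keeps 1D text, 2D image and 3D video positions in one scheme.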
🤖 𝗧𝗵𝗲 𝗔𝗜 𝗦𝗰𝗶𝗲𝗻𝘁𝗶𝘀𝘁: 𝗔𝗴𝗲𝗻𝘁𝗶𝗰, 𝗳𝘂𝗹𝗹𝘆-𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗽𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗳𝗼𝗿 𝘂𝗻𝗱𝗲𝗿 $𝟭𝟱 𝗽𝗲𝗿 𝗽𝗮𝗽𝗲𝗿

Researchers have just created an AI system that 𝗰𝗮𝗻 𝗰𝗼𝗻𝗱𝘂𝗰𝘁 𝗲𝗻𝘁𝗶𝗿𝗲 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀 𝗳𝗿𝗼𝗺 𝘀𝘁𝗮𝗿𝘁 𝘁𝗼 𝗳𝗶𝗻𝗶𝘀𝗵, 𝗽𝗼𝘁𝗲𝗻𝘁𝗶𝗮𝗹𝗹𝘆 𝗿𝗲𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝗶𝘇𝗶𝗻𝗴 𝗵𝗼𝘄 𝘀𝗰𝗶𝗲𝗻𝘁𝗶𝗳𝗶𝗰 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝗶𝗲𝘀 𝗮𝗿𝗲 𝗺𝗮𝗱𝗲.

It doesn't just assist with specific tasks - it automates the entire research process, from generating ideas to writing and reviewing papers.
It can: 1) brainstorm novel research directions, 2) write and execute code for experiments, visualize results and gather references, and even 3) write up the findings in a full academic paper format!

And it can do all this for under $15 per paper! 🤯

𝗞𝗲𝘆 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
🧠 Generates novel research ideas across multiple topics (e.g. diffusion modeling, transformers, learning dynamics aka “grokking”)
👨‍💻 Uses open-source coding assistant Aider to implement ideas and run experiments. This is especially important since this agentic assistant can iterate if it fails somewhere.
📊 Visualizes results and plans follow-up experiments (up to 5 rounds)
✍️ Writes full academic papers, including finding references using Semantic Search API
🕵️ Runs a simulated peer review process to evaluate paper quality
💰 Total cost per paper is under $15. This system can generate "hundreds of interesting, medium-quality papers" in just a week!

𝗦𝘁𝗶𝗹𝗹 𝗻𝗼𝘁 𝗿𝗲𝗮𝗱𝘆 𝘁𝗼 𝗳𝗶𝗹𝗹 𝗜𝗖𝗟𝗥 𝘄𝗶𝘁𝗵 𝗽𝗮𝗽𝗲𝗿𝘀:
🔁 Ideas generated in one domain tend to be repetitive across different runs, and even across different language models
👀 Does not use vision capabilities to fix visual issues in plots
💭 Models occasionally hallucinate entire results tables
⇒ Only a few of the generated papers would actually meet the threshold for acceptance at a top AI conference

👉 Read their paper:
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (2408.06292): https://huggingface.co/papers/2408.06292
Hey everyone 🤗!
Check out this awesome new model for object segmentation!
finegrain/finegrain-object-cutter

We (finegrain) trained this new model in partnership with Nfinite, using some of their synthetic data, and the resulting model is incredibly accurate 🚀.
It’s all open source under the MIT license (finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce (finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
🙋🏻‍♂️ Hey there folks,

✒️InkubaLM has been trained from scratch on 1.9 billion tokens of data for five African languages, along with English and French data, for a total of 2.4 billion tokens. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu and isiXhosa, as well as English and French.

model
lelapa/InkubaLM-0.4B

demo
Tonic/Inkuba-0.4B
Spent a few minutes building an alternative to Character AI on top of Llama 3.1 405B through SambaNova's super-fast inference API

Space:
kz919/Persona-AI

API referral link: https://sambanova.ai/fast-api?api_ref=907266
I started training a public LoRA style (two separate trainings, each on 4x A6000).

Experimenting with captions vs. no captions, so we will see which yields the best results for style training on FLUX.

Generated captions with multi-GPU batch Joycaption app.

I am showing 5 examples of what Joycaption generates on FLUX dev. Left images are the original style images from the dataset.

I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : https://www.patreon.com/posts/110613301

I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : https://www.patreon.com/posts/108992085

The no-caption dataset uses only ohwx 3d render as the caption

I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4x A6000 GPUs, training 500 epochs on 114 images: https://www.patreon.com/posts/110879657

Total step count is 500 * 114 / 4 (4x GPU, batch size 1) = 14,250
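The step arithmetic above can be sanity-checked in a couple of lines (variable names are mine):

```python
# Effective optimizer steps when the dataset is split across GPUs:
# steps = epochs * images / (num_gpus * batch_size)
epochs, images, num_gpus, batch_size = 500, 114, 4, 1
total_steps = epochs * images // (num_gpus * batch_size)  # 14250
```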

It is currently on track to take 37 hours if I don’t terminate early

Will save a checkpoint once every 25 epochs

Full Windows Kohya LoRA training tutorial : https://youtu.be/nySGu12Y05k

I am still editing the full cloud tutorial

Hopefully I will share the trained LoRA on Hugging Face and CivitAI along with the full dataset, including captions.

I got permission to share the dataset, but it can’t be used commercially.

I will also hopefully share the full workflow on the CivitAI and Hugging Face LoRA pages.
# Excited to Share: New LLM Tokenization Tool - Convert Text to Tokens and Vice Versa! 🚀

I've just developed a powerful tool for anyone working with Language Models (LLMs) or diving into Natural Language Processing (NLP).

🔍 Introducing the LLM Tokenization tool - convert text to tokens and vice versa!

Key Features:
- Convert text to tokens and token IDs
- Reverse engineer: convert token IDs back to text
- Support for popular models: Llama 3 (will add more models iteratively)
- User-friendly Gradio interface for easy interaction

Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you!
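For intuition, here is a toy sketch of the encode/decode round trip the tool exposes. The real tool uses Hugging Face Transformers tokenizers (e.g. Llama 3's); this stand-in builds a tiny whitespace vocabulary so it stays self-contained:

```python
# Toy illustration of the encode/decode round trip. A real implementation
# would call AutoTokenizer.from_pretrained(...) from Hugging Face
# Transformers; this tiny vocabulary is a self-contained stand-in.
def build_vocab(corpus):
    tokens = sorted(set(corpus.split()))
    return {tok: i for i, tok in enumerate(tokens)}

def encode(text, vocab):
    return [vocab[tok] for tok in text.split()]

def decode(ids, vocab):
    inv = {i: tok for tok, i in vocab.items()}
    return " ".join(inv[i] for i in ids)

vocab = build_vocab("the quick brown fox jumps over the lazy dog")
ids = encode("the lazy fox", vocab)           # text -> token IDs
assert decode(ids, vocab) == "the lazy fox"   # token IDs -> text
```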

👩‍💻 Tech Stack:
- Python
- Gradio for the web interface
- Hugging Face Transformers for tokenization

The application is deployed on Hugging Face Spaces as a Gradio application.

🔗 Try it out: https://lnkd.in/g6R5z9k2

#NLP #MachineLearning #AI #PythonDevelopment #OpenSourceAI
𝐍𝐞𝐰 𝐑𝐞𝐥𝐞𝐚𝐬𝐞: 𝐌𝐚𝐣𝐨𝐫 𝐓𝐎𝐌 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐄𝐥𝐞𝐯𝐚𝐭𝐢𝐨𝐧 𝐌𝐨𝐝𝐞𝐥 𝐄𝐱𝐩𝐚𝐧𝐬𝐢𝐨𝐧 🗺

Dataset:
Major-TOM/Core-DEM


Today, together with the European Space Agency (ESA) and Adobe Research, we release a global expansion to Major TOM with GLO-30 DEM data.

You can now instantly access nearly 2M Major TOM samples with elevation data to build your next AI model for EO. 🌍

🔍 Browse the data in our usual viewer app:
Major-TOM/MajorTOM-Core-Viewer


Fantastic work championed by Paul Borne-Pons @NewtNewt 🚀
𝐌𝐲 𝐟𝐢𝐫𝐬𝐭 𝐜𝐨𝐦𝐦𝐮𝐧𝐢𝐭𝐲 𝐚𝐫𝐭𝐢𝐜𝐥𝐞! 𝐒𝐞𝐥𝐞𝐜𝐭𝐢𝐯𝐞 𝐟𝐢𝐧𝐞-𝐭𝐮𝐧𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐒𝐩𝐞𝐜𝐭𝐫𝐮𝐦 🎯

Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning.
📔 👣 https://huggingface.co/blog/anakin87/spectrum

---

Looking to fine-tune Language Models efficiently and save on computational resources?

One popular method is QLoRA, which quantizes the original model and trains low-rank adapters on top.
It's quite effective and uses less GPU than full fine-tuning.
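For intuition on why adapters are cheap: instead of updating a full weight matrix W, LoRA trains two small factors B and A and applies W + BA. A back-of-the-envelope parameter count (the layer sizes below are illustrative, not from any specific model):

```python
# Low-rank adaptation in a nutshell: for a d_out x d_in weight W, train
# only B (d_out x r) and A (r x d_in) and use W + B @ A at inference.
d_out, d_in, r = 4096, 4096, 16     # illustrative layer size and adapter rank
full_params = d_out * d_in          # parameters updated by full fine-tuning
lora_params = d_out * r + r * d_in  # parameters updated by LoRA
savings = full_params / lora_params # how many times fewer trainable params
```

With these sizes the adapter trains 128x fewer parameters than full fine-tuning of that layer, which is where the GPU savings come from.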

However, QLoRA applies Low-Rank Adaptation uniformly across the entire model.

What if we could identify the most informative layers and only fine-tune those? 🤔

This is exactly what Spectrum does! 👇

🔬 Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal to Noise Ratio (SNR) for each one.
(It uses Random Matrix Theory and Marchenko-Pastur distribution to distinguish signal from noise.)

🎯 Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).

You can then ❄️ freeze the rest of the model and focus your 🏋️‍♂️ training on the chosen layers.
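A minimal sketch of this selection idea (not the official Spectrum code; the scoring here is a simplified assumption that treats singular values above the Marchenko-Pastur bulk edge as signal):

```python
import numpy as np

def layer_snr(weight, eps=1e-12):
    """Simplified Spectrum-style score: singular values above the
    Marchenko-Pastur bulk edge count as signal, the rest as noise."""
    m, n = weight.shape
    s = np.linalg.svd(weight, compute_uv=False)
    sigma = weight.std()
    # MP upper edge for singular values of an m x n matrix with entry std sigma
    mp_edge = sigma * (np.sqrt(m) + np.sqrt(n))
    signal = s[s > mp_edge].sum()
    noise = s[s <= mp_edge].sum()
    return signal / (noise + eps)

rng = np.random.default_rng(0)
noise_layer = rng.normal(size=(64, 64))  # pure noise, nothing to learn
structured = noise_layer + 5 * np.outer(rng.normal(size=64), rng.normal(size=64))
scores = {"noise": layer_snr(noise_layer), "structured": layer_snr(structured)}
# Rank layers by SNR and keep the top 25% for training, freezing the rest.
top = sorted(scores, key=scores.get, reverse=True)[:max(1, len(scores) // 4)]
```

The structured layer scores far higher, so it is the one Spectrum-style selection would keep unfrozen.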


🏆 Results/Evaluation
- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.
- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.
- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...

---

For a practical guide, check out the article above.
The Forward-Forward Algorithm🤖

FFA replaces the forward and backward passes of backpropagation with two forward passes - one on positive (real) data and another on negative data. Each layer has its own objective function: to increase or decrease a “goodness” metric. The positive pass uses real data and adjusts weights to increase “goodness” in every hidden layer; the negative pass does the opposite.
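A minimal numpy sketch of one such layer (simplified: it omits the paper's goodness threshold and layer normalization, and the data here is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One Forward-Forward layer: goodness = sum of squared ReLU activations.
    Weights move to raise goodness on positive (real) data and lower it on
    negative data; no backward pass through other layers is needed."""
    def __init__(self, n_in, n_out, lr=0.03):
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.lr = lr

    def goodness(self, x):
        h = np.maximum(x @ self.W, 0.0)   # ReLU activations
        return (h ** 2).sum(axis=1).mean()

    def update(self, x, sign):
        h = np.maximum(x @ self.W, 0.0)
        grad = 2 * x.T @ h / len(x)       # d(goodness)/dW (ReLU mask folded into h)
        self.W += sign * self.lr * grad   # +1 = positive pass, -1 = negative pass

layer = FFLayer(8, 16)
pos = rng.normal(loc=1.0, size=(32, 8))   # stand-in "real" data
neg = rng.normal(loc=-1.0, size=(32, 8))  # stand-in "negative" data
before = layer.goodness(pos)
for _ in range(20):
    layer.update(pos, +1.0)
    layer.update(neg, -1.0)
after = layer.goodness(pos)               # goodness on real data has risen
```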

I must say, reading and implementing a godfather paper feels quite fulfilling :)
Thank you, Prof. Geoffrey Hinton.

Code: https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algor
Is AI’s impact on elections being overblown? Three researchers think so in this opinion piece published in MIT Technology Review.

Highlights:

• “AI is being used to try to influence electoral processes, but these efforts have not been fruitful.”
• “Why were these initial speculations about AI-enabled electoral interference so off (…) ? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.”
• “Yet we should remember that there’s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed.”

👉 Read more here: https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/
🚨 𝗛𝘂𝗺𝗮𝗻 𝗙𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗳𝗼𝗿 𝗔𝗜 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴: 𝗡𝗼𝘁 𝘁𝗵𝗲 𝗴𝗼𝗹𝗱𝗲𝗻 𝗴𝗼𝗼𝘀𝗲 𝘄𝗲 𝘁𝗵𝗼𝘂𝗴𝗵𝘁?

I’ve just read a great paper in which Cohere researchers raise significant questions about using human feedback to evaluate AI language models.

Human feedback is often regarded as the gold standard for judging AI performance, but it turns out it might be more like fool's gold: the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance.

𝗞𝗲𝘆 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀:
🧠 Tests several models: Llama-2, Falcon-40B, and Cohere Command 6B and 52B
🙅‍♂️ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer!

💪 Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction, with systems like RLHF.

🎭 The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent.

And a consequence of the above:
🔄 𝗥𝗟𝗛𝗙 𝗺𝗶𝗴𝗵𝘁 𝗯𝗮𝗰𝗸𝗳𝗶𝗿𝗲: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate.

This paper means we need to think carefully about how we evaluate and train AI systems, to ensure we're rewarding correctness over the appearance of it, like confident talk.

⛔️ Chatbot Arena’s Elo leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models become smarter and smarter.

Read the paper 👉
Human Feedback is not Gold Standard (2309.16349): https://huggingface.co/papers/2309.16349
Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation https://mltblog.com/4dNPSnB

New additions to this ground-breaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions.

I also added several illustrations featuring xLLM in action, with a full session and sample commands to fine-tune in real time. All the code, input sources (an anonymized corporate corpus from a Fortune 100 company), and contextual backend tables, including embeddings, are on GitHub. My system has zero weights, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full-text entities from the corpus with no latency — including URLs, categories, titles, email addresses, and so on — thanks to well-designed architecture.

Read more, get the code, paper and everything for free, at https://mltblog.com/4dNPSnB
Zero-shot VQA evaluation of Docmatix using LLM - do we need to fine-tune?
While developing Docmatix, we found that fine-tuning Florence-2 performed well on the DocVQA task but still scored low on the benchmark. To improve the benchmark score, we had to further fine-tune the model on the DocVQA dataset to learn the grammatical style of the benchmark. Interestingly, human evaluators felt that the additionally fine-tuned model performed worse than the one fine-tuned on Docmatix alone, so we decided to use the additionally fine-tuned model only for ablation experiments and to publicly release the model fine-tuned on Docmatix alone.

Although the answers generated by the model are semantically consistent with the reference answers (as shown in Figure 1), the benchmark scores are low. This raises the question: should we fine-tune the model to improve performance on existing metrics, or should we develop new metrics that are more consistent with human perception?
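The gap between strict benchmark scoring and human judgment can be illustrated with a toy scorer. Both functions below are illustrative stand-ins, not the actual DocVQA metric:

```python
def exact_match(pred, ref):
    """Strict scorer: only an exact string match counts."""
    return float(pred == ref)

def relaxed_match(pred, ref):
    """Illustrative relaxed scorer: case/punctuation-insensitive containment,
    a crude stand-in for the human judgment described above."""
    norm = lambda s: "".join(ch.lower() for ch in s if ch.isalnum() or ch == " ").strip()
    p, r = norm(pred), norm(ref)
    return float(r in p or p in r)

# A semantically correct answer phrased differently from the reference:
ref = "15 April 2011"
pred = "The date is 15 april 2011."
scores = (exact_match(pred, ref), relaxed_match(pred, ref))  # (0.0, 1.0)
```

The strict scorer gives the answer zero while the relaxed one accepts it, which is exactly the mismatch between benchmark scores and human evaluation described above.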
📅 AI Event Scheduler - Streamline event creation with this AI Chrome extension, saving time and reducing manual errors.