HF-hub - Share and discover more about AI with social posts from the community.
Just wrapped up a deep dive into the latest lecture on building LLMs, such as ChatGPT, from the @Stanford CS229 course. Here are my top takeaways:

๐Ÿ” Understanding the Components: LLMs like ChatGPT, Claude, and others are more than just neural networks; they are a complex blend of architecture, training loss, data evaluation, and systems. Knowing how these components work together is key to improving and scaling these models.

๐Ÿ“Š Scaling Matters: Performance improves predictably with more data, bigger models, and greater computational power. However, balancing these factors is crucial to avoid overfitting and resource waste.

๐Ÿ“ˆ Data is King: LLMs are trained on trillions of tokens scraped from the internet, but the quality of this data matters immensely. Rigorous filtering and deduplication processes are essential to maintaining data integrity.

๐Ÿ— Pre-Training vs. Post-Training: While pre-training equips the model with general knowledge, post-training (like RLHF) fine-tunes it to follow human-like responses, reducing toxic outputs and improving alignment with human values.

๐ŸŒ Reinforcement Learning from Human Feedback (RLHF): This technique allows LLMs to maximize outputs that align with human preferences, making models more reliable and accurate.

๐Ÿ’ก Why It Matters: Understanding these processes not only helps us appreciate the complexity behind our everyday AI tools but also highlights the challenges and opportunities in the ever-evolving field of AI.

Whether youโ€™re in tech, data science, or just AI-curious, staying updated on these advancements is crucial. LLMs are not just transforming industries; theyโ€™re redefining the future of human-computer interaction!

I just realized this was almost 2 hours long...

Link: https://www.youtube.com/watch?v=9vM4p9NN0Ts
Meta Platforms to use social media posts from Europe to train AI
Meta will train its large language models using content that people in the European Union have chosen to share publicly on its platforms such as Instagram and Facebook. PHOTO: REUTERS
FACEBOOK owner Meta Platforms plans to start incorporating social media content from Europe to train its generative artificial intelligence models, the company said on Monday (Jun 10).

Meta will train its Llama large language models using content that people in the European Union have chosen to share publicly on its platforms such as Instagram and Facebook, it said in a blog post.

The shift appears to bring the company's approach in Europe roughly in line with how it treats the data it feeds into its AI models from elsewhere around the world, despite earlier caution due to stringent EU privacy and transparency regulations.

Meta's top policy executive told Reuters in an interview in September that it uses public Facebook and Instagram posts to train its Llama models, while excluding private posts and messages shared only with friends.

As of April, when the company started releasing the latest versions of Llama, Meta was "still working on the right way to do this in Europe," its chief product officer told Reuters at the time.

The social media giant said last month that it would start notifying Facebook and Instagram users in the European region and the United Kingdom about how it uses public information shared on Meta's services to develop and improve AI.
Link: https://www.businesstimes.com.sg/companies-markets/telcos-media-tech/meta-platforms-use-social-media-posts-europe-train-ai
Chinese and US scientists create AI model to help develop new drugs

Victoria Bela
Published: 6:30pm, 26 Aug 2024
Scientists in China and the United States say they have developed a new artificial intelligence (AI) model that could help overcome some major challenges to drug development and discovery.

The model, called ActFound, outperforms competing models while bypassing challenges to using machine learning in bioactivity prediction, according to a paper published in Nature Machine Intelligence.

"Bioactivity encompasses various properties of compounds, such as their interaction with targets, impact on biological systems and therapeutic effects," said the researchers from Peking University, the University of Washington and AI tech firm INF Technology Shanghai.

The main challenges to using machine learning include limited data labelling and incompatibility between assays, the tests that measure the activity or potency of drugs.

The model not only outperforms competing AI models, but also functions as well as free-energy perturbation (FEP), a traditional computational method.

Although FEP calculations have a high level of accuracy, the team warned that they "require extensive computational resources that are often not affordable for large-scale applications".

Such methods also rely on three-dimensional protein structures, which can only be obtained using expensive equipment and extensive laboratory procedures.
NVIDIA Launches NIM Microservices for Generative AI in Japan, Taiwan

Nations around the world are pursuing sovereign AI to produce artificial intelligence using their own computing infrastructure, data, workforce and business networks to ensure AI systems align with local values, laws and interests.

In support of these efforts, NVIDIA today announced the availability of four new NVIDIA NIM microservices that enable developers to more easily build and deploy high-performing generative AI applications.

The microservices support popular community models tailored to meet regional needs. They enhance user interactions through accurate understanding and improved responses based on local languages and cultural heritage.

In the Asia-Pacific region alone, generative AI software revenue is expected to reach $48 billion by 2030, up from $5 billion this year, according to ABI Research.

Llama-3-Swallow-70B, trained on Japanese data, and Llama-3-Taiwan-70B, trained on Mandarin data, are regional language models that provide a deeper understanding of local laws, regulations and other customs.

The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets, and is available as two different NIM microservices for Chat and Instruct. Rakuten's foundation and instruct models have achieved leading scores among open Japanese large language models, landing the top average score in the LM Evaluation Harness benchmark carried out from January to March 2024.

Training a large language model (LLM) on regional languages enhances the effectiveness of its outputs by ensuring more accurate and nuanced communication, as it better understands and reflects cultural and linguistic subtleties.

The models offer leading performance for Japanese and Mandarin language understanding, regional legal tasks, question-answering, and language translation and summarization compared with base LLMs like Llama 3.

Nations worldwide โ€” from Singapore, the United Arab Emirates, South Korea and Sweden to France, Italy and India โ€” are investing in sovereign AI infrastructure.

The new NIM microservices allow businesses, government agencies and universities to host native LLMs in their own environments, enabling developers to build advanced copilots, chatbots and AI assistants.
Link: https://blogs.nvidia.com/blog/nim-microservices-generative-ai/
Goldman, Nomura tap Meta Llama AI models

In the 18 months since launch, the mostly free open source Llama models have seen nearly 350 million downloads and have been taken up by several major firms, including in financial services.

In a progress report, Meta says that Goldman Sachs' GS AI Platform allows the bank's engineers to use Llama models for various use cases, including information extraction from documents.

Meanwhile, Nomura uses Llama on AWS to achieve faster innovation, transparency, bias guardrails, and performance across text summarisation, code generation, log analysis, and document processing.

Meta has ploughed billions of dollars into AI but is taking a different approach to rivals such as OpenAI with its open source model.

In a July letter, Mark Zuckerberg argued that open source AI is good for Meta because it prevents the firm getting locked into a competitor's closed ecosystem.

In addition, he wrote: "The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone."
Link: https://www.finextra.com/newsarticle/44650/goldman-nomura-tap-meta-llama-ai-models
AI 'tiger' MiniMax launches text-to-video-generating model to rival OpenAI's Sora
Xinmei Shen
Published: 7:00pm, 2 Sep 2024
Chinese artificial intelligence (AI) start-up MiniMax has launched video-01, its new text-to-video-generating model, heating up competition with other mainland tech firms that look to catch up with the advances made by OpenAI's Sora.
MiniMax, known as one of China's AI "tigers" along with Zhipu AI, Baichuan and Moonshot AI, made video-01 available to the public via its website after unveiling the new tool at the company's first developer conference in Shanghai on Saturday.
Video-01 enables a user to input a text description to create a video that is up to six seconds in length. The process from the text prompt to generating a video takes about two minutes.

MiniMax founder and chief executive Yan Junjie said at the event that video-01 is the first iteration of the firm's video-generating tool. He pointed out that future updates will enable users to generate videos from images and to edit these videos, according to local media reports.
Qwen2-VL-7B-Instruct
Introduction
We're excited to unveil Qwen2-VL, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.

What's New in Qwen2-VL?
Key Enhancements:
SoTA understanding of images of various resolution & ratio: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.

Understanding videos of 20min+: Qwen2-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.

Agent that can operate your mobiles, robots, etc.: with the abilities of complex reasoning and decision making, Qwen2-VL can be integrated with devices like mobile phones, robots, etc., for automatic operation based on visual environment and text instructions.

Multilingual Support: to serve global users, besides English and Chinese, Qwen2-VL now supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

Model Architecture Updates:
Naive Dynamic Resolution: Unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them into a dynamic number of visual tokens, offering a more human-like visual processing experience.

Multimodal Rotary Position Embedding (M-ROPE): Decomposes positional embedding into parts to capture 1D textual, 2D visual, and 3D video positional information, enhancing its multimodal processing capabilities.

We have three models with 2, 7 and 72 billion parameters. This repo contains the instruction-tuned 7B Qwen2-VL model. For more information, visit our Blog and GitHub: https://github.com/QwenLM/Qwen2-VL
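For a quick start, the snippet below follows the standard Transformers inference pattern for this model; the demo image URL is a placeholder, and it assumes a recent transformers release plus the qwen-vl-utils helper package:

```python
# Minimal inference sketch for Qwen2-VL-7B-Instruct (image URL is a placeholder)
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/demo.jpg"},  # placeholder image
        {"type": "text", "text": "Describe this image."},
    ],
}]

# Build the chat prompt and pack image/video inputs into tensors
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens before decoding the answer
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```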
The AI Scientist: an agentic, fully automated research pipeline for under $15 per paper

Researchers have just created an AI system that can conduct entire research projects from start to finish, potentially revolutionizing how scientific discoveries are made.

It doesn't just assist with specific tasks - it automates the entire research process, from generating ideas to writing and reviewing papers.
It can (1) brainstorm novel research directions, (2) write and execute code for experiments, visualize results, and gather references, and even (3) write up the findings in a full academic paper format!

And it can do all this for under $15 per paper!

Key insights:
- Generates novel research ideas across multiple topics (e.g. diffusion modeling, transformers, learning dynamics aka "grokking")
- Uses the open-source coding assistant Aider to implement ideas and run experiments. This is especially important since this agentic assistant can iterate if it fails somewhere.
- Visualizes results and plans follow-up experiments (up to 5 rounds)
- Writes full academic papers, including finding references using the Semantic Scholar API
- Runs a simulated peer review process to evaluate paper quality
- Total cost per paper is under $15. The system can generate "hundreds of interesting, medium-quality papers" in just a week!

๐—ฆ๐˜๐—ถ๐—น๐—น ๐—ป๐—ผ๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐˜๐—ผ ๐—ณ๐—ถ๐—น๐—น ๐—œ๐—–๐—Ÿ๐—ฅ ๐˜„๐—ถ๐˜๐—ต ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€:
๐Ÿ” Ideas generated in one domain tend to be repetitive across different runs, and even different language model
๐Ÿ‘€ Does not use vision capabilities to fix visual issues in plots
๐Ÿ’ญ Models occasionally hallucinate entire results tables
โ‡’ Only few of the generated papers would actually meet the threshold for acceptance at a top AI conference

Read their paper:
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (2408.06292): https://huggingface.co/papers/2408.06292
Hey everyone!
Check out this awesome new model for object segmentation!
finegrain/finegrain-object-cutter

We (finegrain) trained this new model in partnership with Nfinite, using some of their synthetic data; the resulting model is incredibly accurate.
It's all open source under the MIT license (finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce (finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
Hey there folks,

InkubaLM has been trained from scratch on 1.9 billion tokens of data for five African languages, along with English and French data, for a total of 2.4 billion tokens. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French. A minimal loading sketch follows the model and demo links below.

Model: lelapa/InkubaLM-0.4B
Demo: Tonic/Inkuba-0.4B
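A minimal loading sketch with Transformers; the Swahili prompt, generation settings, and the trust_remote_code flag are assumptions for illustration:

```python
# Hedged sketch: load InkubaLM-0.4B and generate a short continuation
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lelapa/InkubaLM-0.4B"
# trust_remote_code may be needed if the repo ships custom model code (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)

prompt = "Habari ya leo ni"  # illustrative Swahili prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```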
Spent a few minutes building an alternative to Character AI on top of Llama 3.1 405B through SambaNova's super-fast inference API

Space:
kz919/Persona-AI

API referral link: https://sambanova.ai/fast-api?api_ref=907266
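For reference, here is a rough sketch of how such a persona bot might call a hosted Llama 3.1 405B endpoint. The base URL, model identifier, and OpenAI-compatible interface are assumptions for illustration, not SambaNova's documented API; check their docs before using this:

```python
# Hedged sketch: persona chat against a hosted Llama 3.1 405B endpoint
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",   # assumed endpoint
    api_key="YOUR_API_KEY",
)

persona = "You are Ada Lovelace. Stay in character and answer playfully."
resp = client.chat.completions.create(
    model="Meta-Llama-3.1-405B-Instruct",     # assumed model identifier
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What do you think of modern computers?"},
    ],
)
print(resp.choices[0].message.content)
```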
I started training a public LoRA style (two separate trainings, each on 4x A6000).

Experimenting with captions vs. no captions, so we will see which yields the best results for style training on FLUX.

Generated captions with the multi-GPU batch JoyCaption app.

I am showing 5 examples of what JoyCaption generates on FLUX dev. The left images are the original style images from the dataset.

I used my multi-GPU JoyCaption app (8x A6000 for ultra-fast captioning): https://www.patreon.com/posts/110613301

I used my Gradio batch caption editor to edit some words and add the activation token "ohwx 3d render": https://www.patreon.com/posts/108992085

The no-caption dataset uses only "ohwx 3d render" as the caption.

I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4x A6000 GPUs, training for 500 epochs on 114 images: https://www.patreon.com/posts/110879657

Total step count is 500 * 114 / 4 (4x GPU, batch size 1) = 14,250.

Currently estimated to take 37 hours if I don't terminate early.

Will save a checkpoint once every 25 epochs.

Full Windows Kohya LoRA training tutorial: https://youtu.be/nySGu12Y05k

I am still editing the full cloud tutorial.

I will hopefully share the trained LoRA on Hugging Face and CivitAI along with the full dataset, including captions.

I got permission to share the dataset, but it can't be used commercially.

Also, I will hopefully share the full workflow on the CivitAI and Hugging Face LoRA pages.
# Excited to Share: New LLM Tokenization Tool - Convert Text to Tokens and Vice Versa!

I've just developed a powerful tool for anyone working with Large Language Models (LLMs) or diving into Natural Language Processing (NLP).

Introducing the LLM Tokenization tool - convert text to tokens and vice versa!

Key Features:
- Convert text to tokens and token IDs
- Reverse engineer: convert token IDs back to text
- Support for popular models: Llama 3 (will add more models iteratively)
- User-friendly Gradio interface for easy interaction

Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you!

Tech Stack:
- Python
- Gradio for the web interface
- Hugging Face Transformers for tokenization

The application is deployed on Hugging Face Spaces as a Gradio application. A minimal sketch of the core tokenize/detokenize logic is shown below.
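The sketch assumes the Llama 3 tokenizer from Hugging Face Transformers and a simple two-tab Gradio layout; the exact model repo and UI are assumptions, not the deployed app's implementation:

```python
# Hedged sketch of a text <-> token converter with Transformers + Gradio
import gradio as gr
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # assumed model repo

def text_to_tokens(text: str):
    # Encode text into token IDs, then show the corresponding token strings
    ids = tokenizer.encode(text, add_special_tokens=False)
    tokens = tokenizer.convert_ids_to_tokens(ids)
    return str(tokens), ", ".join(map(str, ids))

def ids_to_text(ids_csv: str):
    # Reverse direction: decode a comma-separated list of token IDs back into text
    ids = [int(i) for i in ids_csv.split(",") if i.strip()]
    return tokenizer.decode(ids)

with gr.Blocks() as demo:
    with gr.Tab("Text -> Tokens"):
        inp = gr.Textbox(label="Text")
        toks = gr.Textbox(label="Tokens")
        tok_ids = gr.Textbox(label="Token IDs")
        gr.Button("Tokenize").click(text_to_tokens, inp, [toks, tok_ids])
    with gr.Tab("Token IDs -> Text"):
        ids_in = gr.Textbox(label="Comma-separated token IDs")
        out = gr.Textbox(label="Decoded text")
        gr.Button("Decode").click(ids_to_text, ids_in, out)

demo.launch()
```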

Try it out: https://lnkd.in/g6R5z9k2

#NLP #MachineLearning #AI #PythonDevelopment #OpenSourceAI
๐๐ž๐ฐ ๐‘๐ž๐ฅ๐ž๐š๐ฌ๐ž: ๐Œ๐š๐ฃ๐จ๐ซ ๐“๐Ž๐Œ ๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ ๐„๐ฅ๐ž๐ฏ๐š๐ญ๐ข๐จ๐ง ๐Œ๐จ๐๐ž๐ฅ ๐„๐ฑ๐ฉ๐š๐ง๐ฌ๐ข๐จ๐ง ๐Ÿ—บ

Dataset:
Major-TOM/Core-DEM


Today, together with the European Space Agency (ESA) and Adobe Research, we release a global expansion of Major TOM with GLO-30 DEM data.

You can now instantly access nearly 2M Major TOM samples with elevation data to build your next AI model for Earth observation (EO).

Browse the data in our usual viewer app:
Major-TOM/MajorTOM-Core-Viewer


Fantastic work championed by Paul Borne-Pons @NewtNewt
๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ

Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning:
https://huggingface.co/blog/anakin87/spectrum

---

Looking to fine-tune Language Models efficiently and save on computational resources?

One popular method is QLoRA, which quantizes the original model and trains low-rank adapters on top.
It's quite effective and uses less GPU memory than full fine-tuning.

However, QLoRA applies Low-Rank Adaptation uniformly across the entire model.
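As a reference point, here is a hedged QLoRA-style setup with bitsandbytes quantization and PEFT adapters; the base model, rank, and target modules are illustrative choices, not a prescription:

```python
# Hedged QLoRA sketch: 4-bit quantized base model + LoRA adapters on every block
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # illustrative base model
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters are attached uniformly to the targeted projections in every layer
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```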

What if we could identify the most informative layers and only fine-tune those?

This is exactly what Spectrum does!

Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal-to-Noise Ratio (SNR) for each one.
(It uses Random Matrix Theory and the Marchenko-Pastur distribution to distinguish signal from noise.)

Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).

You can then freeze the rest of the model and focus your training on the chosen layers.
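A minimal sketch of that freezing step, assuming you already have the list of layers Spectrum selected (the layer names below are made up for illustration):

```python
# Hedged sketch: train only the Spectrum-selected modules, freeze everything else
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")  # illustrative model

# In practice these names come from Spectrum's SNR analysis; these are placeholders
selected_layers = [
    "model.layers.3.mlp.down_proj",
    "model.layers.17.self_attn.o_proj",
]

for name, param in model.named_parameters():
    # keep gradients only for parameters belonging to a selected module
    param.requires_grad = any(name.startswith(layer) for layer in selected_layers)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```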


๐Ÿ† Results/Evaluation
- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.
- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.
- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...

---

For a practical guide, check out the article above.
The Forward-Forward Algorithm

FFA replaces the forward and backward passes of backpropagation with two forward passes: one on positive (real) data and another on negative data. Each layer has its own objective function: to increase or decrease a "goodness" metric. The positive pass uses real data and adjusts weights to increase "goodness" in every hidden layer; the negative pass does the opposite.
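For intuition, here is a minimal sketch of a single Forward-Forward layer in PyTorch; the goodness definition, threshold, and optimizer settings are my own assumptions for illustration, not the exact setup in the repo linked below:

```python
# Hedged sketch of one Forward-Forward layer trained locally (no backprop across layers)
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold  # goodness target separating positive from negative data
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # pass only the direction of the input to the next layer, not its magnitude
        x = x / (x.norm(dim=1, keepdim=True) + 1e-4)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)  # goodness on positive (real) data
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)  # goodness on negative data
        # push positive goodness above the threshold and negative goodness below it
        loss = torch.log1p(torch.exp(torch.cat([
            self.threshold - g_pos,
            g_neg - self.threshold,
        ]))).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # detach outputs so the next layer trains on fixed inputs
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```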

I must say, reading and implementing a godfather paper feels quite fulfilling :)
Thank you Prof. Geoffrey Hinton.

Code: https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algor
Is AI's impact on elections being overblown? Three researchers think so in this opinion piece published in the MIT Technology Review.

Highlights:

• "AI is being used to try to influence electoral processes, but these efforts have not been fruitful."
• "Why were these initial speculations about AI-enabled electoral interference so off (…)? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology."
• "Yet we should remember that there's a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed."

Read more here: https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/
Human Feedback for AI training: not the golden goose we thought?

I've just read a great paper where Cohere researchers raise significant questions about using human feedback to evaluate AI language models.

Human feedback is often regarded as the gold standard for judging AI performance, but it turns out it might be more like fool's gold: the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance.

๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:
๐Ÿง  Test several models: Llama-2, Falcon-40B, Cohere Command 6 and 52B ๐Ÿ™…โ€โ™‚๏ธ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer!

๐Ÿ’ช Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction, with systems like RLHF.

๐ŸŽญ The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent.

And a consequence of the above:
- RLHF might backfire: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate.

This paper means we need to think carefully about how we evaluate and train AI systems, to ensure we're rewarding correctness rather than the appearance of it, like confident-sounding talk.

Chatbot Arena's Elo leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models become smarter and smarter.

Read the paper:
Human Feedback is not Gold Standard (2309.16349): https://huggingface.co/papers/2309.16349
Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation https://mltblog.com/4dNPSnB

New additions to this ground-breaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions.

I also added several illustrations, featuring xLLM in action with a full session and sample commands to fine-tune in real time. All the code, input sources (an anonymized corporate corpus from a Fortune 100 company), and contextual backend tables including embeddings are on GitHub. My system has zero weights, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full-text entities from the corpus with no latency, including URLs, categories, titles, email addresses, and so on, thanks to a well-designed architecture.

Read more, get the code, paper and everything for free, at https://mltblog.com/4dNPSnB
…
Zero-shot VQA evaluation of Docmatix using an LLM - do we need to fine-tune?

While developing Docmatix, we found that fine-tuning Florence-2 on it performed well on the DocVQA task but still scored low on the benchmark. To improve the benchmark score, we had to further fine-tune the model on the DocVQA dataset so that it learned the grammatical style of the benchmark. Interestingly, the human evaluators felt that the additionally fine-tuned model performed worse than the model fine-tuned on Docmatix alone, so we decided to use the additionally fine-tuned model only for ablation experiments and to publicly release the model fine-tuned on Docmatix alone.

Although the answers generated by the model are semantically consistent with the reference answers (as shown in Figure 1), the benchmark scores are low. This raises the question: should we fine-tune the model to improve performance on existing metrics, or should we develop new metrics that are more consistent with human perception?
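For concreteness, here is a hedged sketch of what zero-shot LLM-based answer evaluation can look like: a judge model is asked whether the generated answer matches the reference semantically. The judge model, prompt wording, and client are assumptions for illustration, not the exact setup used for Docmatix.

```python
# Hedged sketch: LLM-as-judge for semantic answer matching (judge model is an assumption)
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")  # assumed judge model; needs an HF token

def judge(question: str, reference: str, prediction: str) -> bool:
    prompt = (
        "You are evaluating answers to document VQA questions.\n"
        f"Question: {question}\nReference answer: {reference}\nModel answer: {prediction}\n"
        "Do the two answers convey the same information? Reply with only 'yes' or 'no'."
    )
    reply = client.chat_completion(
        messages=[{"role": "user", "content": prompt}], max_tokens=3
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# Example: semantically equivalent answers with different surface forms
print(judge("What is the invoice total?", "$1,200", "1200 dollars"))
```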
While developing Docmatix, we found that fine-tuning Florence-2 performed well on the DocVQA task, but still scored low on the benchmark. To improve the benchmark score, we had to further fine-tune the model on the DocVQA dataset to learn the grammatical style of the benchmark. Interestingly, the human evaluators felt that the additional fine-tuning seemed to perform worse than fine-tuning on Docmatix alone, so we decided to only use the additional fine-tuned model for ablation experiments and publicly release the model fine-tuned on Docmatix alone. Although the answers generated by the model are semantically consistent with the reference answers (as shown in Figure 1), the benchmark scores are low. This raises the question: should we fine-tune the model to improve performance on existing metrics, or should we develop new metrics that are more consistent with human perception?