Share and discover more about AI with social posts from the community.
Just published a demo for Salesforce's new function-calling model, Salesforce/xLAM:

- Tonic/Salesforce-Xlam-7b-r
- Tonic/On-Device-Function-Calling


Just try 'em out, and it comes with an on-device version too! Cool! 🚀
We are trying to unite, join forces, and cooperate on AI experiments in Latin America. We invite you to join us in "LatinAI". The idea is to share and organize Spaces, models, and datasets in Spanish, Portuguese, Guarani, Mapuche, or English for development in Latin America.
Feel free to join the organization: https://huggingface.co/LatinAI (AI Developers from Latin America)
Just tried LitServe from the good folks at @LightningAI!

Between llama.cpp and vLLM, there is a small gap where a few large models are not deployable!

That's where LitServe comes in!

LitServe is a high-throughput serving engine for AI models built on FastAPI.

Yes, built on FastAPI. That's where the advantage and the issue lie.

It's extremely flexible and supports multi-modality and a variety of models out of the box.

But in my testing, it lags far behind in speed compared to vLLM.

Also, no OpenAI API-compatible endpoint is available as of now.

But as we move to multi-modal models and agents, this serves as a good starting point. However, it’s got to become faster...
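
For reference, here is roughly what a minimal LitServe server looks like (a sketch based on the project's README; the toy model and the request/response schema are placeholders):

# Minimal LitServe sketch (based on the project's README; the "model" below is a
# toy stand-in and the JSON request/response shape is illustrative).
import litserve as ls

class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # Load the real model once per worker; a placeholder callable stands in here.
        self.model = lambda x: x ** 2

    def decode_request(self, request):
        # Pull the input out of the JSON payload, e.g. {"input": 4.0}
        return request["input"]

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(SimpleLitAPI(), accelerator="auto")
    server.run(port=8000)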

GitHub: https://github.com/Lightning-AI/LitServe
I found this paper to be thought-provoking: "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.
https://arxiv.org/abs/2408.16737
The direct implication is that smaller models could be used to create cost-effective synthetic datasets. On that note, Google's Gemma terms of use explicitly claim no rights over outputs generated by those models, which means one is free to generate synthetic data from the Gemma line. Meta's Llama 3 license forbids using generated outputs to improve other models. The relevant Mistral, Qwen, and Yi models, released under the Apache 2.0 license, are unrestricted for this purpose.
From Article 50 of the EU AI Act:

"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

How might this be put into practice?

I'm interested to know how content might be deemed "detectable" as artificially generated. Will an image need to remain detectable as AI-generated even after it has been copied out of the site/application it was created in?

Some sort of watermark? LSB steganography? I wonder if OpenAI is already sneaking something like this into DALL-E images.
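
As a toy illustration of the LSB idea (my own hypothetical sketch, not anything OpenAI is known to do), a marker string could be hidden in the least-significant bits of the pixel data:

# Toy LSB-steganography sketch (hypothetical illustration only), using Pillow + NumPy.
import numpy as np
from PIL import Image

MARKER = "AI-GENERATED"

def embed_marker(in_path: str, out_path: str) -> None:
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(MARKER.encode(), dtype=np.uint8))
    flat = img.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    Image.fromarray(flat.reshape(img.shape)).save(out_path, format="PNG")  # lossless

def read_marker(path: str, n_chars: int = len(MARKER)) -> str:
    flat = np.array(Image.open(path).convert("RGB")).flatten()
    bits = flat[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

Of course, this only survives lossless copies; any lossy re-encode or screenshot wipes it out, which is exactly the open question about copying content out of the original application.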

Some sort of hash that allows content to be looked up and verified as AI-generated?

Would a pop-up saying "this output was generated with AI" suffice? Any ideas? Time is on the system providers' side, at least for now, as from what I can see this doesn't come into effect until August 2026.

src: https://artificialintelligenceact.eu/article/50/
💾🧠How much VRAM will you need for training your AI model? 💾🧠
Check out this app where you convert:
PyTorch/TensorFlow summary -> needed VRAM
or
Parameter count -> needed VRAM
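
For intuition, the parameter-count path boils down to multiplying parameters by bytes per value (a deliberately simplified sketch; activations, batch size, and framework overhead add more on top, which is what the app accounts for):

# Back-of-the-envelope training VRAM estimate from parameter count alone
# (simplified: ignores activations, batch size, and framework overhead).
def estimate_training_vram_gb(n_params: float,
                              weight_bytes: int = 2,      # bf16/fp16 weights
                              grad_bytes: int = 2,        # bf16/fp16 gradients
                              optimizer_bytes: int = 8):  # Adam: two fp32 states per param
    total_bytes = n_params * (weight_bytes + grad_bytes + optimizer_bytes)
    return total_bytes / 1024**3

print(round(estimate_training_vram_gb(7e9), 1), "GB")  # rough figure for a 7B model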

Use it at: http://howmuchvram.com

And everything is open source! Request new features or contribute at:
https://github.com/AlexBodner/How_Much_VRAM
If it's useful to you, leave a star 🌟 and share it with someone who will find the tool useful!
Last Week in Medical AI: Top Research Papers/Models
🏅 (August 25 - August 31, 2024)

- MultiMed: Multimodal Medical Benchmark
- A Foundation model for generating chest X-ray images
- MEDSAGE: Medical Dialogue Summarization
- Knowledge Graphs for Radiology Report Generation
- Exploring Multi-modal LLMs for Chest X-ray
- Improving Clinical Note Generation
...

Check the full thread: https://x.com/OpenlifesciAI/status/1829984701324448051
Understanding the JSON response format with HF's Serverless Inference API 🤗

As it stands, there seems to be an inconsistency with the OpenAI documentation on how to implement the JSON response format with the InferenceClient chat-completion API.

After investigating the InferenceClient source code, I'm sharing the official solution, which uses a JSON Schema. This constrains the structure of the response and simplifies parsing as part of an automated pipeline for extracting metadata and other information:

from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

messages = [
    {
        "role": "user",
        "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?",
    },
]

response_format = {
    "type": "json",
    "value": {
        "properties": {
            "location": {"type": "string"},
            "activity": {"type": "string"},
            "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
            "animals": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["location", "activity", "animals_seen", "animals"],
    },
}

response = client.chat_completion(
    messages=messages,
    response_format=response_format,
    max_tokens=500,
)

print(response.choices[0].message.content)
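
Since the content field comes back as a JSON string (assuming the model honors the schema), turning it into a Python object is just:

import json

data = json.loads(response.choices[0].message.content)
print(data["animals"], data["animals_seen"])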

As a reminder, JSON mode is activated with the OpenAI client as follows:

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[...],
    response_format={"type": "json_object"},
)

One question remains open, and perhaps the community can answer it: there still seems to be an incompatibility when generating lists of dictionaries; for now, producing a flat dictionary appears to be the only working option.
Amazing day. AWPortrait-FL is finally here!
🦖 AWPortrait-FL is finetuned on FLUX.1-dev using the training set of AWPortrait-XL plus nearly 2,000 fashion photographs of extremely high aesthetic quality.

🤗Model:
Shakker-Labs/AWPortrait-FL


🙇Demo:
vilarin/AWPortrait-FL
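
For local inference, a hedged diffusers sketch (assuming the checkpoint loads with FluxPipeline like other FLUX.1-dev fine-tunes; the prompt and settings below are placeholders):

# Hedged local-inference sketch; prompt, resolution, and sampler settings are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("Shakker-Labs/AWPortrait-FL", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    prompt="close-up portrait, fashion editorial lighting, 85mm",
    width=768,
    height=1024,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("awportrait_fl_sample.png")
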
Check Your Redirects and Status Codes
Quickly analyze 301, 302 redirects and other HTTP status codes to optimize your website's performance and SEO.
https://link.zhihu.com/?target=https%3A%2F%2Fredirect-checker.girff.com%2F
The best-selling SaaS affiliate programs that make you money.
Monetize your content through affiliate programs from top SaaS companies and generate a side income, fast.

https://www.nodeseek.com/jump?to=https%3A%2F%2Fsaasinfopro.com%2F
AI Video THUDM/CogVideoX-5b
CogVideoX is an open-source version of the video generation model originating from QingYing. The table below displays the list of video generation models we currently offer, along with their foundational information.


When testing using the diffusers library, all optimizations provided by the diffusers library were enabled. This solution has not been tested for actual VRAM/memory usage on devices other than NVIDIA A100 / H100. Generally, this solution can be adapted to all devices with NVIDIA Ampere architecture and above. If the optimizations are disabled, VRAM usage will increase significantly, with peak VRAM usage being about 3 times higher than the table shows. However, speed will increase by 3-4 times. You can selectively disable some optimizations, including:
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

When performing multi-GPU inference, the enable_model_cpu_offload() optimization needs to be disabled.
Using INT8 models reduces inference speed; this is to let GPUs with lower VRAM run inference normally with minimal loss of video quality, at the cost of a significantly lower speed.
The 2B model is trained with FP16 precision, and the 5B model is trained with BF16 precision. We recommend using the precision the model was trained with for inference.
PytorchAO and Optimum-quanto can be used to quantize the text encoder, Transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes it possible to run the model on a free T4 Colab or GPUs with smaller VRAM! It is also worth noting that TorchAO quantization is fully compatible with torch.compile, which can significantly improve inference speed. FP8 precision must be used on devices with NVIDIA H100 or above, which requires installing the torch, torchao, diffusers, and accelerate Python packages from source. CUDA 12.4 is recommended.
The inference speed test also used the above VRAM optimization scheme. Without VRAM optimization, inference speed increases by about 10%. Only the diffusers version of the model supports quantization.
The model only supports English input; other languages can be translated into English during refinement by a large model.
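
Putting those pieces together, a minimal text-to-video sketch with the diffusers pipeline (following the model card's example; the prompt, step count, and output path are placeholders):

# Minimal CogVideoX-5b sketch with the memory optimizations discussed above;
# bf16 matches the 5B model's training precision.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # disable this for multi-GPU inference
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(
    prompt="A panda playing an acoustic guitar in a bamboo forest",
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
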
https://huggingface.co/THUDM/CogVideoX-5b
Automated web scraping with Playwright is becoming easier by the day. Now, using Ollama tool calling, it's possible to perform very high-accuracy web scraping (in some cases 100% accurate) just by asking an LLM to scrape the content for you.

This can be completed in a multistep process similar to Cohere's platform. If you have tried the Cohere playground with web scraping, this will feel very similar. In my experience, the Llama 3.1 version is much better due to the larger context window. Both tools are great, but the difference is that the Ollama + Playwright version is completely controlled by you.

All you need to do is wrap your scraper in a function:


async def query_web_scraper(url: str) -> dict:
    scraper = WebScraper(headless=False)
    return await scraper.query_page_content(url)


and then make your request:


# First API call: Send the query and function description to the model
response = ollama.chat(
    model=model,
    messages=messages,
    tools=[
        {
            'type': 'function',
            'function': {
                'name': 'query_web_scraper',
                'description': 'Scrapes the content of a web page and returns the structured JSON object with titles, articles, and associated links.',
                'parameters': {
                    'type': 'object',
                    'properties': {
                        'url': {
                            'type': 'string',
                            'description': 'The URL of the web page to scrape.',
                        },
                    },
                    'required': ['url'],
                },
            },
        },
    ]
)
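
From there, the returned tool call can be executed and its result handed back to the model (a sketch assuming the ollama Python client's response layout; error handling omitted):

import json

# Run the requested tool, then return the scraped content to the model so it can
# produce the final structured answer.
async def handle_tool_call(response, messages, model):
    tool_call = response['message']['tool_calls'][0]
    if tool_call['function']['name'] == 'query_web_scraper':
        scraped = await query_web_scraper(tool_call['function']['arguments']['url'])
        messages.append(response['message'])
        messages.append({'role': 'tool', 'content': json.dumps(scraped)})
        return ollama.chat(model=model, messages=messages)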


To learn more:
Github w/ Playground: https://github.com/tdolan21/tool-calling-playground/blob/main/notebooks/ollama-playwright-web-scraping.ipynb
Complete Guide: https://medium.com/@tdolan21/building-an-llm-powered-web-scraper-with-ollama-and-playwright-6274d5d938b5
Kwai-Kolors/Kolors-Virtual-Try-On
Thought this was an interesting graphic from the EAGLE blog post. It made me wonder if certain sampling methods have been shown to work better for certain tasks.

Does anyone know of any work looking at trends in the output token probability distribution by task type? (or similar)

Source: https://sites.google.com/view/eagle-llm
Continuing my streak by releasing the Wikireading dataset: a large collection of scraped non-fiction books, predominantly in Russian.
its5Q/wikireading


Here are the highlights:
- ~7B tokens, or ~28B characters, making it a great candidate for use in pretraining
- Contains non-fiction works from many knowledge domains
- Includes both the original HTML and extracted text of book chapters
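
To poke at it, a quick loading sketch (the dataset ID is from the post; the split name and record fields are assumptions until you check the dataset card):

from datasets import load_dataset

# Stream rather than download ~28B characters up front; a "train" split is assumed.
ds = load_dataset("its5Q/wikireading", split="train", streaming=True)
print(next(iter(ds)))  # inspect the first record's fields
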
The word 'Lead' has three definitions. When an LLM tokenizes this word, it is always the same token. Imagine being able to put any particular embedding, at any particular time, into a 'Quantum State'. When an embedding is in a Quantum State, the word token could have up to 3 different meanings (x1, x2, x3). The Quantum State gets collapsed based on the individual context surrounding the word. 'Jill lead Joy to the store' would collapse to x1. 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results:


https://www.youtube.com/watch?v=tuQI6A-EOqE
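
A toy sketch of the "collapse" step as described above (my own illustrative code, not the author's implementation): keep several candidate sense vectors for the ambiguous token and pick the one closest to the context embedding.

# Illustrative only: choose among per-sense embeddings by cosine similarity to a
# context vector; the vectors here are random placeholders, not trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
SENSES = {"x1": rng.normal(size=64), "x2": rng.normal(size=64), "x3": rng.normal(size=64)}

def collapse(context_vec: np.ndarray) -> str:
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(SENSES, key=lambda sense: cos(SENSES[sense], context_vec))

print(collapse(rng.normal(size=64)))  # whichever sense best matches the context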