Share and discover more about AI with social posts from the community.
📚 Cokeep - Transform bookmarks into collaborative spaces with AI organization, summarization, and team sharing capabilities.
🎨 Crayon AI - Unleash creativity with an all-in-one AI image toolbox offering generation, editing, and optimization for all skill levels.
🖥 Tailwind Genie - Generate responsive UI designs with AI, streamlining web development using Tailwind CSS.
🤗 Video Ai Hug - Transform static photos into personalized hugging videos, bringing cherished moments to life.
📝 Postin - Supercharge your LinkedIn presence with AI-crafted posts, smart management, and engagement-boosting strategies.
📊 Metastory AI v2.2 - Enhance project management with the latest Metastory AI update, which adds Jira integration, project publishing, and an improved editor for streamlined collaboration.
🔎 Beloga - Intelligently capture and seamlessly search across Notion, GDrive, notes, the internet and more simultaneously with a digital brain that’s designed to help amplify your knowledge.
Sick of feeling like a broken record, endlessly repeating instructions?

It’s time to let AI do the talking. Meet Guidde - your GPT-powered ally that transforms even the most complex tasks into crystal-clear, AI-generated video documentation at lightning speed.

Seamlessly share or embed your guides anywhere, hassle-free

Say goodbye to dry documentation and hello to beautiful guides

Reclaim precious time generating documentation 11x faster with AI

Best of all, it only takes 3 steps:

Install the free guidde Chrome extension

Click ‘Capture’ in the extension and ‘Stop’ when done

Sit back and let AI handle the rest, then share your guide
🚔 AI Police Cams - Between July and August, AI cameras used in two UK counties detected over 2,000 people not wearing seat belts on three roads, including 109 children. One case involved an unrestrained toddler sitting on a woman's lap in the front passenger seat. Not only are AI-powered cameras being used for seat belts, they’re also being used to catch litterers.
🧠 Qwen - New updates have been made to Qwen's AI models across multiple modalities. Qwen2-VL is a new vision-language model capable of understanding high-resolution images and 20+ minute videos; Qwen2-Audio processes voice inputs; and Qwen-Agent is an approach that extends 8K-context models to handle 1M tokens.
📹 Wyze - A new AI-powered search feature from Wyze allows users to search through their camera footage using keywords and natural language queries. Instead of manually scrolling through recorded events, users can now search for specific objects, people, or activities like "truck," "delivery person," or even more detailed requests like "show me my cat in the backyard."
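For the curious, features like this are typically built on joint text-image embeddings. A toy sketch using off-the-shelf CLIP via sentence-transformers (my own illustration with placeholder frame paths, not Wyze's actual implementation):

```python
# A toy sketch of natural-language search over camera frames with CLIP
# (an illustration of the general technique, not Wyze's implementation).
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# placeholder paths; in practice these would be frames sampled from footage
frames = [Image.open(p) for p in ["frame1.jpg", "frame2.jpg"]]
frame_emb = model.encode(frames)
query_emb = model.encode(["a cat in the backyard"])

# rank frames by cosine similarity to the text query
scores = util.cos_sim(query_emb, frame_emb)[0]
best = int(scores.argmax())
print(f"best match: frame {best} (score {float(scores[best]):.2f})")
```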
Celebrating Hugging Face's acquisition of huggingface.com at a high price.
sequelbox posted an update:
new synthetic general chat dataset! meet Supernova, a dataset using prompts from UltraFeedback and responses from Llama 3.1 405b Instruct:
sequelbox/Supernova


new model(s) using the Supernova dataset will follow next week, along with Other Things. (One of these will be a newly updated version of Enigma, utilizing the next version of sequelbox/Tachibana with approximately 2x the rows!)
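If you want to poke at Supernova yourself, it loads like any other Hub dataset (the "train" split name is an assumption; check the dataset card):

```python
# quick look at the Supernova dataset (split name is an assumption)
from datasets import load_dataset

ds = load_dataset("sequelbox/Supernova", split="train")
print(ds)     # features and row count
print(ds[0])  # first prompt/response pair
```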
just published a demo for Salesforce's new Function Calling Model Salesforce/xLAM:

- Tonic/Salesforce-Xlam-7b-r
- Tonic/On-Device-Function-Calling


just try 'em out, and it comes with an on-device version too! cool! 🚀
We are trying to unite, join forces, and cooperate on AI experiments in Latin America. We invite you to join us in "LatinAI". The idea is to share and organize Spaces, models, and datasets in Spanish/Portuguese/Guarani/Mapuche or English for development in Latin America.
Feel free to join the organization: https://huggingface.co/LatinAI
Just tried LitServe from the good folks at @LightningAI!

Between llama.cpp and vLLM, there is a small gap where a few large models are not deployable!

That's where LitServe comes in!

LitServe is a high-throughput serving engine for AI models built on FastAPI.

Yes, built on FastAPI. That's where the advantage and the issue lie.

It's extremely flexible and supports multi-modality and a variety of models out of the box.

But in my testing, it lags far behind in speed compared to vLLM.

Also, no OpenAI API-compatible endpoint is available as of now.

But as we move to multi-modal models and agents, this serves as a good starting point. However, it’s got to become faster...

GitHub: https://github.com/Lightning-AI/LitServe
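For reference, the serving surface is small: you subclass ls.LitAPI and hand it to a server. A minimal sketch adapted from the project's README, with a trivial squaring "model" standing in for real weights:

```python
# A minimal LitServe server; the squaring "model" is a stand-in
# for real weights you would load in setup().
import litserve as ls

class SimpleAPI(ls.LitAPI):
    def setup(self, device):
        # runs once per worker: load models/pipelines here
        self.model = lambda x: x ** 2

    def decode_request(self, request):
        # pull the model input out of the request payload
        return request["input"]

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(SimpleAPI(), accelerator="auto")
    server.run(port=8000)
```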
I found this paper to be thought-provoking: "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.
https://arxiv.org/abs/2408.16737
The direct implication is that smaller models could be used to create cost-effective synthetic datasets. And on that note, in the Gemma terms of use, Google explicitly claims no rights over outputs generated from those models, which means one is free to generate synthetic data from the Gemma line. Meta's Llama 3 license forbids using outputs to improve other models. Relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.
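As a concrete illustration, small-model synthetic generation is only a few lines with transformers (the model choice and prompt are my placeholders, not the paper's setup; Gemma checkpoints are gated, so Hub access must be granted first):

```python
# A minimal sketch of synthetic data generation with a smaller model;
# model and prompts are placeholders, not the paper's exact setup.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-2-2b-it")

prompts = ["Explain, step by step, why the sky is blue."]
synthetic = [
    generator(p, max_new_tokens=256)[0]["generated_text"]
    for p in prompts
]
```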
From Article 50 of the EU AI Act:

"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

How might this be put into practice?

I'm interested to know how content might be deemed "detectable" as artificially generated. I wonder if this will require that an image remain detectable as AI-generated even after it's copied out of the site or application it was created in.

Some sort of watermark? LSB steganography? I wonder if OpenAI is already sneaking something like this into DALL-E images.

Some sort of hash, allowing content to be looked up and verified as AI-generated?

Would a pop-up saying "this output was generated with AI" suffice? Any ideas? Time is on system providers' side, at least for now: from what I can see, this doesn't come into effect until August 2026.

src: https://artificialintelligenceact.eu/article/50/
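To make the LSB idea concrete, here is a toy marker in Python (my own illustration, not any provider's real scheme). Note it only survives lossless formats, which is exactly the copy-it-out problem raised above:

```python
# A toy LSB-steganography marker, purely to illustrate one idea from the post;
# it does not survive lossy re-encoding (e.g. JPEG).
from PIL import Image

MARK = "AI-GENERATED"

def embed_mark(path_in, path_out, mark=MARK):
    img = Image.open(path_in).convert("RGB")
    bits = "".join(f"{b:08b}" for b in mark.encode())
    stamped = []
    for i, (r, g, b) in enumerate(img.getdata()):
        if i < len(bits):  # hide one bit in the red channel's LSB
            r = (r & ~1) | int(bits[i])
        stamped.append((r, g, b))
    img.putdata(stamped)
    img.save(path_out, "PNG")  # lossless format, so the bits survive

def extract_mark(path, length=len(MARK)):
    img = Image.open(path).convert("RGB")
    lsbs = [r & 1 for r, g, b in list(img.getdata())[: length * 8]]
    bits = "".join(map(str, lsbs))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()
```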
💾🧠 How much VRAM will you need for training your AI model? 💾🧠
Check out this app where you convert:
PyTorch/TensorFlow summary -> needed VRAM
or
Parameter count -> needed VRAM

Use it at: http://howmuchvram.com

And everything is open source! Ask for new features or contribute at:
https://github.com/AlexBodner/How_Much_VRAM
If it's useful to you, leave a star 🌟 and share it with someone who will find the tool useful!
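For a back-of-envelope check (my own rule of thumb, not necessarily the app's exact formula): full-precision training with Adam needs roughly 16 bytes per parameter, before counting activations:

```python
# Rough training-VRAM estimate from a parameter count: fp32 training with Adam
# needs ~16 bytes/param (4 weights + 4 grads + 8 optimizer states),
# plus activations on top. Not necessarily the app's exact formula.
def training_vram_gb(param_count, bytes_per_param=16):
    return param_count * bytes_per_param / 1024**3

print(f"{training_vram_gb(7e9):.0f} GB")  # ~104 GB for a 7B model, pre-activations
```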
Last Week in Medical AI: Top Research Papers/Models
🏅 (August 25 - August 31, 2024)

- MultiMed: Multimodal Medical Benchmark
- A Foundation model for generating chest X-ray images
- MEDSAGE: Medical Dialogue Summarization
- Knowledge Graphs for Radiology Report Generation
- Exploring Multi-modal LLMs for Chest X-ray
- Improving Clinical Note Generation
...

Check the full thread: https://x.com/OpenlifesciAI/status/1829984701324448051