HF-hub - Share and discover more about AI with social posts from the community.
2024 Workshop on Tackling Climate Change with Machine Learning
About
Many in the ML community wish to take action on climate change, but are unsure of the pathways through which they can have the most impact. This workshop highlights work that demonstrates that, while no silver bullet, ML can be an invaluable tool in reducing greenhouse gas emissions and in helping society adapt to the effects of climate change. Climate change is a complex problem, for which action takes many forms - from theoretical advances to deployment of new technology. Many of these actions represent high-impact opportunities for real-world change, and are simultaneously interesting academic research problems.

This workshop is part of the “Tackling Climate Change with Machine Learning” workshop series, which aims to bring together those applying ML to climate change challenges and facilitate cross-pollination between ML researchers and experts in climate-relevant fields. https://www.climatechange.ai/events/neurips2024
2024.8.8 FLUX.1-DEV Canny - a Hugging Face Space by DamarJati
metadata
title: FLUX.1-DEV Canny
emoji: 🧋
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 4.40.0
app_file: app.py
pinned: true
short_description: FLUX Dev - Controlnet Canny
https://github.com/XLabs-AI/x-flux.git
https://huggingface.co/spaces/DamarJati/FLUX.1-DEV-Canny

#Flux #Controlnet
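If you'd like to call the Space from a script rather than the web UI, here is a minimal sketch using gradio_client. The Space's endpoint names and parameters aren't documented in this post, so this only connects and lists them:

from gradio_client import Client

# Connect to the public Space and inspect its callable API.
client = Client("DamarJati/FLUX.1-DEV-Canny")
client.view_api()  # prints endpoint names and parameter signatures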
Nvidia / llama3-chatqa-1.5-70b

AI models generate responses and outputs based on complex algorithms and machine learning techniques, and those responses or outputs may be inaccurate, harmful, biased or indecent. By testing this model, you assume the risk of any harm caused by any response or output of the model. Please do not upload any confidential information or personal data unless expressly permitted. Your use is logged for security purposes. https://build.nvidia.com/nvidia/chatqa-1-5-70b/projects
Experience this model first-hand using NVIDIA AI Workbench, a unified, easy-to-use toolkit for creating, testing and customizing pretrained generative AI models and LLMs. Learn more: https://www.nvidia.com/en-us/deep-learning-ai/solutions/data-science/workbench/
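To try the hosted model outside the browser, NVIDIA's build.nvidia.com endpoints speak an OpenAI-compatible API. A minimal sketch under that assumption (the base URL and model id below follow NVIDIA's convention but are not stated in this post; you need your own API key):

from openai import OpenAI

# Assumed OpenAI-compatible NIM endpoint; substitute your own key.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key="nvapi-...",                             # your NVIDIA API key
)
completion = client.chat.completions.create(
    model="nvidia/llama3-chatqa-1.5-70b",            # assumed model id
    messages=[{"role": "user", "content": "What is ChatQA tuned for?"}],
)
print(completion.choices[0].message.content)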
Syn v2.0.72 - Parser for Rust source code

Syn is a parsing library for parsing a stream of Rust tokens into a syntax tree of Rust source code.

Currently this library is geared toward use in Rust procedural macros, but contains some APIs that may be useful more generally.

Data structures — Syn provides a complete syntax tree that can represent any valid Rust source code. The syntax tree is rooted at syn::File which represents a full source file, but there are other entry points that may be useful to procedural macros including syn::Item, syn::Expr and syn::Type.

Derives — Of particular interest to derive macros is syn::DeriveInput which is any of the three legal input items to a derive macro. An example below shows using this type in a library that can derive implementations of a user-defined trait.

Parsing — Parsing in Syn is built around parser functions with the signature fn(ParseStream) -> Result<T>. Every syntax tree node defined by Syn is individually parsable and may be used as a building block for custom syntaxes, or you may dream up your own brand new syntax without involving any of our syntax tree types.

Location information — Every token parsed by Syn is associated with a Span that tracks line and column information back to the source of that token. These spans allow a procedural macro to display detailed error messages pointing to all the right places in the user's code. There is an example of this below.

Feature flags — Functionality is aggressively feature gated so your procedural macros enable only what they need, and do not pay in compile time for all the rest.

Version requirement: Syn supports rustc 1.61 and up.
https://crates.io/crates/syn
#syn
Flux Examples | ComfyUI_examples workflows
Regular Full Version
Files to download for the regular version
If you don’t have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them at this link. You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32 GB of RAM.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

Tips if you are running out of memory:
Use the single-file fp8 version, which you can find below.

You can set the weight_dtype in the “Load Diffusion Model” node to fp8, which will halve memory usage but might reduce quality a tiny bit. You can also download the example.

Flux Dev
You can find the Flux Dev diffusion model weights here. Put the flux1-dev.safetensors file in your: ComfyUI/models/unet/ folder.

You can then load or drag the following image in ComfyUI to get the workflow:

Flux Schnell
Flux Schnell is a distilled 4-step model.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.

You can then load or drag the following image in ComfyUI to get the workflow:

Simple to use FP8 Checkpoint version
Flux Dev
You can find an easy-to-use checkpoint for Flux Dev here that you can put in your ComfyUI/models/checkpoints/ directory.

This file can be loaded with the regular “Load Checkpoint” node. Make sure you set CFG to 1.0 when using it.

Note that fp8 degrades the quality a bit, so if you have the resources, the official full 16-bit version is recommended.

You can then load or drag the following image in ComfyUI to get the workflow:
Flux Schnell
For Flux Schnell you can get the checkpoint here that you can put in your ComfyUI/models/checkpoints/ directory.

You can then load or drag the following image in ComfyUI to get the workflow: https://comfyanonymous.github.io/ComfyUI_examples/flux/
#Flux
safetensors
v0.4.4
Provides functions to read and write safetensors, which aim to be safer than their PyTorch counterpart. The format starts with 8 bytes holding an unsigned little-endian 64-bit integer, the size of a JSON header; the JSON header records each tensor's dtype, shape, and data_offsets, the byte offsets of its values in the rest of the file.
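To make that layout concrete, here is a minimal sketch that reads only the header of a file on disk (assuming a local "model.safetensors", such as the one written in the Getting started example below):

import json
import struct

# Read the 8-byte little-endian header size, then parse the JSON header.
with open("model.safetensors", "rb") as f:
    (header_size,) = struct.unpack("<Q", f.read(8))
    header = json.loads(f.read(header_size))

# Each entry maps a tensor name to its dtype, shape, and byte range.
for name, meta in header.items():
    if name != "__metadata__":  # optional free-form metadata block
        print(name, meta["dtype"], meta["shape"], meta["data_offsets"])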
Installation
Pip
You can install safetensors via the pip manager:

pip install safetensors
From source
To build from source, you need Rust:

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Make sure it's up to date and using stable channel
rustup update
git clone https://github.com/huggingface/safetensors
cd safetensors/bindings/python
pip install setuptools_rust
pip install -e .
Getting started
import torch
from safetensors import safe_open
from safetensors.torch import save_file

tensors = {
    "weight1": torch.zeros((1024, 1024)),
    "weight2": torch.zeros((1024, 1024)),
}
save_file(tensors, "model.safetensors")

tensors = {}
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        tensors[key] = f.get_tensor(key)
Python documentation
#tensorflow #pytorch #huggingface #tensors #safetensors
Kudos to the InternLM team for shipping such brilliant model checkpoints!

Let's gooo! InternLM 2.5 20B with an Apache 2.0 license, up to a 1M context window & trained on copious amounts of synthetic data! ⚡️

> Beats Gemma 27B IT; MMLU: 73.5, MATH: 64.7
> Up to 20% improvement on reasoning tasks over the last iteration
> Supports function calling and tool use
> Base & Instruct models released
> Along with the 20B they release 1.8B and 7B (both looking incredibly strong)
> Uses the same architecture as InternLM2
> Integrated with Transformers (remote code) 🤗

> Interesting bit: they use some form of iterative process to generate synthetic data, train, and improve (would love to know more about this). https://huggingface.co/collections/internlm/internlm25-66853f32717072d17581bc13
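Since the checkpoints are integrated with Transformers via remote code, a minimal chat sketch might look like this (the repo id and the chat() helper are assumptions based on the collection's model cards, so verify them there):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id from the InternLM2.5 collection; remote code is required.
model_id = "internlm/internlm2_5-20b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# The remote-code model class exposes a chat() helper (assumed from the model card).
response, history = model.chat(tokenizer, "Summarize what makes InternLM2.5 notable.", history=[])
print(response)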
Just released: Shining Valiant 2 for Llama 3.1 8b!

- the first SV at 8b size, using the best 8b model
- newest version of the SV dataset improves specialist knowledge and response consistency

3.1 70b will be coming but our next releases will focus on expanding the Build Tools lineup. Get ready for some open-source synthetic datasets made with 3.1 405, coming VERY soon :)
Prompting Guide
Shining Valiant 2 uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:

import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

print(outputs[0]["generated_text"][-1])

https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2
Scalable Nested Optimization for Deep Learning
⚡️ My PhD thesis, “Scalable Nested Optimization for Deep Learning,” is now on arXiv! ⚡️

tl;dr: We develop various optimization tools; highlights include:
· Making the momentum coefficient complex for adversarial games like GANs.
· Optimizing millions of hyperparameters using implicit differentiation.
· Tuning hyperparameters using hypernetworks.
· Differentiably finding bifurcations in optimization for diverse solutions.

https://arxiv.org/abs/2407.01526
Segment Anything 2 Demo - Meta

SAM 2 from Meta FAIR is the first unified model for real-time, promptable object segmentation in images & videos. Using the model in our web-based demo you can segment, track and apply effects to objects in video in just a few clicks.
https://sam2.metademolab.com/
Really cool to see that SF3D is trending on Hugging Face. They created an amazing system for setting up the demos super easily, and even extending Gradio was fairly straightforward - I’ve done a relightable viewer for it.

https://huggingface.co/spaces/stabilityai/stable-fast-3d and the viewer: https://pypi.org/project/gradio-litmodel3d/
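If you want to embed that viewer in your own demo, a minimal sketch might look like this. The import name below is an assumption based on Gradio's custom-component naming convention; check the PyPI page for the actual API:

import gradio as gr
from gradio_litmodel3d import LitModel3D  # assumed import name for the custom component

# Minimal Gradio app embedding the relightable 3D viewer.
with gr.Blocks() as demo:
    viewer = LitModel3D(label="Relightable model viewer")  # displays a GLB/GLTF file

demo.launch()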
Introducing Idefics 3 8B Llama 3, Apache 2.0 licensed VLM with enhanced Document QA capabilities! ⚡️

> Vision backbone: SigLip, Text backbone: Llama 3.1 8B
> Text + Image input w/ text output
> 8.5B parameter model
> Supports up to 10K context
> Apache 2.0 licensed
> DocVQA
Link: https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3
New multimodal release: Idefics3!

Adding vision to Llama 3.1 8b 👀
Strong improvement over April's Idefics2: +14 points on DocVQA, +6 points on MathVista 🧠
Interleave up to 60 images with text! 🤯
Comparable performance to the unreleased Llama 3.1 8B multimodal 🦾
8B-parameters: runs natively in one A100 🤏
Open license: Apache 2.0 🤗
Transparent training data: ethically sourced datasets, built for the community 🥳
Use it today with our branch of transformers and our open weights: https://huggingface.co/HuggingFaceM4/Idefics3-8B-Llama3
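A minimal usage sketch (hedged: this assumes the Idefics2-style processor/chat-template interface, and the image URL below is a placeholder; per the post, you need the HuggingFaceM4 branch of transformers):

import requests
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "HuggingFaceM4/Idefics3-8B-Llama3"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

# Placeholder image URL; substitute your own document image.
image = Image.open(requests.get("https://example.com/page.png", stream=True).raw)

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "What is this document about?"},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])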
Haiyan Zhang: Fortifying Teams with AI and Optimized Workflows
Last week, I had an opportunity to speak at SIGGRAPH, one of the computer graphics industry’s premier events that focuses on research, education, and skill development. I spoke with Munkhtsetseg Nandigjav, Associate Dean School of Animation & Motion at Savannah College of Art and Design, about my role as the General Manager for Gaming AI at Microsoft Gaming, our Responsible AI framework, and the ways that leaders can support their teams with AI to adapt to an ever-changing industry landscape.

Before I dive into the specifics of AI for Gaming and how I believe it can help change the industry we love for the better, I want to share a bit about my own background and why this matters so much to me. https://developer.microsoft.com/en-us/games/articles/2024/08/fortifying-teams-with-ai-and-optimized-workflows/
New strategies in the fight against AI deepfakes from Google
Google (NASDAQ: GOOGL) has introduced new policy updates to intensify its fight against artificial intelligence (AI)-generated content portraying individuals in explicit contexts without their permission.

In a statement, the tech giant disclosed that it will demote results of explicit deepfakes in Google Search to protect victims from bad actors amid a spike in offensive incidents. Google says the latest tools against deepfakes are an improvement on its existing policies with the most drastic change being the ease of filing complaints.

While victims have always enjoyed the right to request takedowns of non-consensual fake content from Google Search, the latest improvements allow for easy reporting of offensive websites. Google’s statement disclosed that the company will remove duplicates of the derogatory content on the web, building on its experiments with other illicit content.

“These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” read the statement.

The second weapon in Google’s arsenal against deepfakes is an improvement in the Search ranking system. Google believes its decision to build systems to rank quality information at the top of Search may be “the best protection against harmful content.”

Going forward, the search giant unveiled plans to push AI-generated NSFW (not safe for work) content lower on its rankings to stifle its distribution. For searches involving specific names, Google says it will promote high-quality, non-explicit content to drown out the exposure to AI-generated deepfakes.

There are plans to outrightly demote websites that have a slew of reports against them for AI deepfakes, smothering their circulation and distribution from the source.

The combination of these features is poised to reduce incidents by up to 70%, but the company notes that the fight is far from finished. For now, Google continues to grapple with distinguishing consensual deepfakes from those made without an individual's approval, as search engines are unable to make the distinction.

“These changes are major updates to our protections on Search, but there’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” said Google. https://coingeek.com/google-unveils-new-strategies-in-fight-against-ai-deepfakes/
UCSC-VLAA/MedTrinity-25M: A Large-Scale Multimodal Dataset
https://huggingface.co/papers/2408.02900
MedTrinity-25M is a large-scale multimodal dataset in the field of medicine.

Key Highlights
Dataset size and coverage: Covers more than 25 million images from 10 modalities with multi-granular annotations for more than 65 diseases.
Richness of annotations: Contains global textual information, such as disease/lesion type, modality, region-specific descriptions, and inter-regional relations, as well as detailed local annotations of regions of interest (ROIs), such as bounding boxes and segmentation masks.

Innovative data generation: Developed the first automated pipeline to scale up multimodal data by generating multi-granular visual and textual annotations (in the form of image-ROI-description triplets) without requiring paired image-text descriptions.
Data collection and processing: Collected and preprocessed data from more than 90 different sources, and identified ROIs associated with abnormal regions using domain-specific expert models.
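To explore the dataset, a minimal sketch with the datasets library (hedged: the repo id comes from the post title, but a configuration name may be required and split names should be checked on the dataset page; streaming avoids downloading all 25M samples up front):

from datasets import load_dataset

# Stream the dataset rather than downloading 25M samples.
# Repo id from the post; a config name may be required on the Hub.
ds = load_dataset("UCSC-VLAA/MedTrinity-25M", split="train", streaming=True)

sample = next(iter(ds))
print(sample.keys())  # expect an image plus multi-granular textual annotations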