Share and discover more about AI with social posts from the community.
How to install elastic plugin?
User Agent Processor Plugin
Install the plugin: sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent.
Restart Elasticsearch: sudo systemctl restart elasticsearch.
Confirm the plugin is installed: GET /_cat/plugins.
How to install extensions in Ubuntu terminal?
Step 1: Install the Browser Add-on. Install the official browser extension first. ...
Step 2: Install 'Chrome GNOME Shell' package. Step two is to install the native connector package that lets the browser extension communicate with the GNOME Shell desktop. ...
Step 3: Install Extensions.
Nov 14, 2023
To manually add a plugin to your WordPress website:
Download the desired plugin as a . ...
From your WordPress dashboard, choose Plugins > Add New.
Click Upload Plugin at the top of the page.
Click Choose File, locate the plugin . ...
After the installation is complete, click Activate Plugin.
Jan 24, 2022
How to install plugins in terminal?
To install the plug-ins:
Click Start, and enter cmd in the Search box. Command Prompt appears.
Click Run as administrator.
Navigate to the Enterprise Client installation path. For example, C:\Program Files (x86)\Automation Anywhere\Enterprise\Client.
How to install elastic on Ubuntu server?
Elasticsearch Tutorial: How to Install Elasticsearch on Ubuntu
Step 1: Update Your Ubuntu. ...
Step 2: Install Java. ...
Step 3: Download Elasticsearch. ...
Step 4: Install and Configure Elasticsearch on Ubuntu. ...
Step 5: Start Elasticsearch and Test It. ...
Step 6: Secure Elasticsearch on Ubuntu.
How to install plugin in Ubuntu?
How do I add plugins in Chrome in Ubuntu?
Method 1: Download or Install Google Chrome on Ubuntu.
Method 2: Access the Google Chrome Web Store.
Method 3: Search for and Select the Plugin option.
Method 4: Download or Install the Plugin extension.
Method 5: Manage all the internal Plugins.
Conclusion.
How to install the STM32 IDE on Ubuntu?
Running on Ubuntu, this comprehensive integrated development environment (IDE) offers everything you need for software development.
Open the Terminal. ...
Update and Upgrade. ...
Download STM32Cube. ...
Install Java Runtime Environment. ...
Extract and Install STM32Cube. ...
Launch STM32Cube.
How to install zsh plugins on Ubuntu?
Step 2: Install Zsh. To install Zsh, enter the following command: sudo apt install zsh.
Step 3: Set Zsh as Your Default Shell. Once installed, you can set Zsh as your default shell with this command: chsh -s $(which zsh) ...
Step 4: Install Oh My Zsh (Optional, but Recommended) ...
Step 5: Customize Zsh with Themes and Plugins.
How to install Coolify on Ubuntu?
What is Coolify?
Before we get our hands dirty, let's understand what Coolify is. Coolify is an open-source, self-hostable platform that allows you to deploy your web apps, static sites, and databases directly to your servers. It's like having your own Heroku but with the freedom to control every aspect of the infrastructure.

Why Ubuntu?
Ubuntu is known for its stability and widespread support, making it a favorite among developers for hosting applications. It's also well-documented and easy to use, providing a solid foundation for our Coolify installation.

Prerequisites
Before we begin, ensure you have the following:
- An Ubuntu server (20.04 LTS recommended)
- SSH access to your server
- Basic knowledge of the Linux command line

Step 1: Update Your Server
First things first, let’s make sure your Ubuntu server is up-to-date. Connect to your server via SSH and run the following commands:

sudo apt update
sudo apt upgrade -y
This will fetch the latest versions of the packages and upgrade them.

Step 2: Install Docker
Coolify runs on Docker, so our next step is to install Docker on your Ubuntu server. Execute the following commands:

sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce -y
To ensure Docker is installed correctly, run:

sudo systemctl status docker
You should see Docker running as a service.

Step 3: Install Docker Compose

Although Coolify uses its own version of Docker Compose, it’s good practice to have the official version installed:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Verify the installation with:

docker-compose --version
Step 4: Install Coolify
Now, we’re ready to install Coolify. Clone the Coolify repository and run the installation script:

git clone https://github.com/coollabsio/coolify.git
cd coolify/scripts
./install.sh
Follow the on-screen instructions to complete the setup.

Congratulations! You have successfully set up the environment necessary for running Coolify on your Ubuntu server. In the next part, I will cover how to configure Coolify, secure your setup, and deploy your first application.

Stay tuned, and happy deploying! GitHub - coollabsio/coolify: An open-source & self-hostable Heroku / Netlify / Vercel alternative.
An Introduction to the Transformers Library and the Hugging Face Platform
Source code: https://github.com/huggingface/transformers
Documentation: https://huggingface.co/docs/transformers/main/en/index

The Hugging Face platform is a collection of ready-made, state-of-the-art, pre-trained deep learning models, and the Transformers library provides the tools and interfaces to load and use them easily. This saves you the time and resources that would otherwise be needed to train models from scratch.

The models cover a very diverse range of tasks:

NLP: classification, NER, question answering, language modeling, summarization, translation, multiple choice, text generation.

CV: classification, object detection, segmentation.

Audio: classification, automatic speech recognition.

Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, visual question answering.

Reinforcement Learning

Time Series

The same task can be tackled by different architectures, and the list is impressive: more than 150 at the moment. Among the best known are Vision Transformer (ViT), T5, ResNet, BERT, and GPT-2. More than 60,000 models have been trained on these architectures. GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
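To see how little code this takes, here is a minimal illustrative sketch using the library's pipeline API (the example text and printed output are just for illustration):

from transformers import pipeline

# Download and cache a default sentiment model, then run it on a sentence.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face makes it easy to reuse pre-trained models."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# The same one-liner pattern works for other tasks, such as translation.
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("The library handles downloading and loading for you."))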
How to Use Hugging Face Agents for NLP Tasks
Hugging Face, the open-source AI community for machine learning practitioners, recently integrated the concept of tools and agents into its popular Transformers library. If you have already used Hugging Face for natural language processing (NLP), computer vision, or audio/speech tasks, you will be interested in the additional capabilities Transformers now offers.

Agents significantly improve the user experience. Suppose you want to use a model from the Hugging Face Hub to translate text from English to French. You would have to do some research to find a good model, figure out exactly how to use it, and then write the code to produce the translation.

But what if you had an expert on the Hugging Face platform who had already mastered all of this? You would simply tell the expert that you want to translate a sentence from English to French, and they would take care of finding a good model, writing the code for the task, and returning the result, all much faster than you or I could.

That is exactly what agents are for. You describe a task to an agent in plain English, and the agent surveys the tools in its arsenal and completes the task. It is much like asking ChatGPT to translate a sentence and letting ChatGPT take care of the rest. But instead of being limited to the few models ChatGPT relies on (i.e., OpenAI models such as GPT-3.5 and GPT-4), agents have access to the many models available on Hugging Face.

Now that we understand how agents and tools work, let's see how to put them into practice.
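As a hedged sketch of what that looks like in code (the agents interface has changed across Transformers releases, so treat this as illustrative rather than the current API; the StarCoder endpoint is one commonly used backing model):

from transformers import HfAgent

# The agent is backed by a Hub-hosted code-generating LLM; it chooses a tool
# (here, translation) and writes and runs the code to use it.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

translation = agent.run(
    "Translate the following text from English to French.",
    text="Agents pick the model and write the code for you.",
)
print(translation)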
Google launches first AI-powered Android update and new Pixel 9 phones
Google on Tuesday announced new artificial intelligence features that are coming to Android devices. The move to bring its Gemini AI assistant to supported devices shows again how Google aims to put its AI in front of consumers before Apple, which will launch its AI on iPhones, Macs and iPads later this year.

Google doesn’t make a lot of money from its hardware business but the latest Android features could help drive new revenue through the company’s Gemini AI subscription program.

“We’ve completely rebuilt the assistant experience with Gemini, so you can speak to it naturally the way you would with another person,” said Android Ecosystem President Sameer Samat in a Tuesday blog post. “It can understand your intent, follow your train of thought and complete complex tasks.”

“Starting today, you can bring up Gemini’s overlay on top of the app you’re using to ask questions about what’s on your screen,” Samat wrote. It will be available on hundreds of phone models from dozens of device makers, according to Google.

Google previously had some AI features in Android, but this is the first year it’s heavily emphasizing new capabilities powered by a large AI language model installed on devices.

One example the company provided involved a user uploading a photo of a concert list and asking Gemini to see if their calendar is free, after which Gemini checks Google Calendar. If the user has availability in their schedule, Gemini offers to create a reminder to check ticket prices later that night.
https://www.cnbc.com/2024/08/13/google-pixel-9-phones-first-ai-powered-android-update-announced.html Google launches first AI-powered Android update and new Pixel 9 phones
Gemini makes your mobile device a powerful AI assistant
Aug 13, 2024

Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.
For years, we’ve relied on digital assistants to set timers, play music or control our smart homes. This technology has made it easier to get things done and saved valuable minutes each day.

Now with generative AI, we can provide a whole new type of help for complex tasks that can save you hours. With Gemini, we’re reimagining what it means for a personal assistant to be truly helpful. Gemini is evolving to provide AI-powered mobile assistance that will offer a new level of help — all while being more natural, conversational and intuitive.

Learn more about the new Gemini features, which will be available on both Android and iOS.

Rolling out today: Gemini Live
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.
https://blog.google/products/gemini/made-by-google-gemini-ai-updates/ Gemini makes your mobile device a powerful AI assistant
Context Caching with Gemini 1.5 Flash
Google recently released a new feature called context-caching which is available via the Gemini APIs through the Gemini 1.5 Pro and Gemini 1.5 Flash models. This guide provides a basic example of how to use context-caching with Gemini 1.5 Flash.


https://youtu.be/987Pd89EDPs?si=j43isgNb0uwH5AeI

The Use Case: Analyzing a Year's Worth of ML Papers
The guide demonstrates how you can use context caching to analyze the summaries of all the ML papers we've documented over the past year. We store these summaries in a text file, which can then be fed to the Gemini 1.5 Flash model and queried efficiently.
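As a rough sketch of that flow with the google-generativeai SDK (the file name, TTL, and question below are made up for illustration; see the linked guide for the exact setup):

import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")

# Upload the large document once, e.g. a year's worth of paper summaries.
doc = genai.upload_file("ml_paper_summaries.txt")

# Cache the long context so it is processed and billed only once.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    contents=[doc],
    ttl=datetime.timedelta(hours=1),
)

# Later queries reuse the cached context instead of resending the whole file.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Which summaries mention long-context techniques?")
print(response.text)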
https://www.promptingguide.ai/applications/context-caching
Function Calling with GPT-4
As a basic example, let's say we asked the model to check the weather in a given location.

The LLM alone would not be able to respond to this request because it has been trained on a dataset with a cutoff point. The way to solve this is to combine the LLM with an external tool. You can leverage the function calling capabilities of the model to determine an external function to call along with its arguments and then have it return a final response. Below is a simple example of how you can achieve this using the OpenAI APIs.

Let's say a user is asking the following question to the model:

What is the weather like in London?

To handle this request using function calling, the first step is to define a weather function or set of functions that you will be passing as part of the OpenAI API request:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]

The get_current_weather function returns the current weather in a given location. When you pass this function definition as part of the request, the model doesn't actually execute the function; it just returns a JSON object containing the arguments needed to call it. Here are some code snippets of how to achieve this. https://www.promptingguide.ai/applications/function_calling
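For instance, here is a hedged sketch of the request itself, continuing with the tools list defined above (the client setup and model name are assumptions; any tool-capable chat model works):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "user", "content": "What is the weather like in London?"}]

response = client.chat.completions.create(
    model="gpt-4",       # assumed model name
    messages=messages,
    tools=tools,         # the list defined above
    tool_choice="auto",
)

# The model does not run anything; it returns the call it wants you to make.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # get_current_weather
print(tool_call.function.arguments)  # e.g. '{"location": "London"}'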
Function Calling with LLMs
Getting Started with Function Calling

Function calling is the ability to reliably connect LLMs to external tools to enable effective tool usage and interaction with external APIs.

LLMs like GPT-4 and GPT-3.5 have been fine-tuned to detect when a function needs to be called and to output JSON containing the arguments for that call. The functions invoked through function calling act as tools in your AI application, and you can define more than one in a single request.

Function calling is an important ability for building LLM-powered chatbots or agents that need to retrieve context for an LLM or interact with external tools by converting natural language into API calls.

Function calling enables developers to create:

conversational agents that can efficiently use external tools to answer questions. For example, the query "What is the weather like in Belize?" will be converted to a function call such as get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
LLM-powered solutions for extracting and tagging data (e.g., extracting people names from a Wikipedia article)
applications that can help convert natural language to API calls or valid database queries
conversational knowledge retrieval engines that interact with a knowledge base
In this guide, we demonstrate how to prompt models like GPT-4 and open-source models to perform function calling for different use cases. https://www.promptingguide.ai/applications/function_calling
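To make the weather example concrete end to end, here is a hedged, self-contained sketch (the model name, client setup, and the stand-in weather values are assumptions for illustration, not part of the guide):

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

def get_current_weather(location, unit="celsius"):
    # Stand-in implementation; a real app would call a weather API here.
    return json.dumps({"location": location, "temperature": 31, "unit": unit})

messages = [{"role": "user", "content": "What is the weather like in Belize?"}]
first = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]

# Run the requested function with the model-supplied arguments...
result = get_current_weather(**json.loads(call.function.arguments))

# ...then return the result as a "tool" message so the model can answer.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
print(final.choices[0].message.content)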
Hugging Face Space by GlidingDragonEntertainment

Let's create a simple Python app using FastAPI:

requirements.txt


fastapi
uvicorn[standard]
Hint: You can also create the requirements file directly in your browser.
app.py


from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def greet_json():
    return {"Hello": "World!"}
Hint: You can also create the app file directly in your browser.
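As an optional aside before wiring up Docker (not part of the Space template; it assumes the httpx package is installed), you can sanity-check the endpoint locally with FastAPI's test client:

# Quick local check of the app defined in app.py above.
from fastapi.testclient import TestClient
from app import app

client = TestClient(app)
assert client.get("/").json() == {"Hello": "World!"}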
Create your Dockerfile:


# Read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile

FROM python:3.9

RUN useradd -m -u 1000 user
USER user
ENV PATH="/home/user/.local/bin:$PATH"

WORKDIR /app

COPY --chown=user ./requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt

COPY --chown=user . /app
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]

https://huggingface.co/spaces/GlidingDragonEntertainment/README Docker Spaces
LATAM Out-of-Distribution Few-shot Challenge
The LATAM Out-of-Distribution Few-shot Challenge is designed to push the boundaries of machine learning in autonomous driving applications. Participants will develop models that can classify unusual or specific vehicle types from minimal training data, a crucial skill in environments with unique vehicular regulations.

Challenge Description
Participants will utilize models initially trained on the ImageNet-1K dataset. The challenge involves fine-tuning these models using only six support images in a few-shot learning setup. The task is split into two distinct groups:

Easily Distinguishable Classes: For example, tuk-tuks, which are distinct from other vehicles in their appearance and function.
Sub-Groups of Common Classes: For example, fuel-transporting trucks, which require specific recognition due to unique regulatory requirements in traffic, such as maintaining a greater distance from these vehicles.
The goal is for models to effectively recognize and classify images into these specific categories with high precision, using the provided support set.

Key Challenge Details
Initial Training Data: Models will be pre-trained on the ImageNet-1K dataset.
Few-shot Learning: Fine-tuning with only six support images.
Application Focus: Autonomous driving, with emphasis on safety and regulatory compliance for specific vehicle types.
Allowed Techniques: Techniques that address out-of-distribution samples and adversarial training are permitted, provided that there's no exposure to the target domain.
Restrictions: The use of large language models (LLMs) is prohibited due to the difficulty of verifying their training data domains. https://huggingface.co/spaces/Artificio/ROAM2FewShotChallenge ROAM2FewShotChallenge - a Hugging Face Space by Artificio
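As a purely illustrative baseline (not the challenge's official protocol; the file names, class choices, and hyperparameters below are hypothetical), one simple few-shot approach is to freeze an ImageNet-1K backbone and fit a linear head on the six support images:

import torch
import torch.nn as nn
from torchvision import models
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-1K pre-trained backbone with the classifier removed.
weights = models.ResNet50_Weights.IMAGENET1K_V2
backbone = models.resnet50(weights=weights)
backbone.fc = nn.Identity()
backbone.eval().to(device)
preprocess = weights.transforms()

# Hypothetical support set: six labelled images across two target classes.
support = [
    ("support/tuk_tuk_1.jpg", 0), ("support/tuk_tuk_2.jpg", 0), ("support/tuk_tuk_3.jpg", 0),
    ("support/fuel_truck_1.jpg", 1), ("support/fuel_truck_2.jpg", 1), ("support/fuel_truck_3.jpg", 1),
]

with torch.no_grad():
    feats = torch.stack([
        backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)).squeeze(0)
        for path, _ in support
    ])
labels = torch.tensor([label for _, label in support], device=device)

# Train only a small linear head on the frozen features.
head = nn.Linear(feats.shape[1], 2).to(device)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(head(feats), labels)
    loss.backward()
    optimizer.step()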
Introducing FLUX LoRA the Explorer 🧭

Explore, generate and download FLUX LoRAs! 🖼 Including the popular flux-realism and the cute Frosting Lane

Come over, we're just getting started
https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer FLUX LoRa the Explorer - a Hugging Face Space by multimodalart
Hugging Face x Polars!

Polars now supports native reading from @huggingface datasets. Check out our latest blog to learn more about it:
https://pola.rs/posts/polars-hugging-face/ Hugging Face x Polars
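As a quick illustrative sketch (the dataset path below is a placeholder), reading a Parquet file directly from a Hugging Face dataset repository looks like this:

import polars as pl

# Read a Parquet file straight from a Hugging Face dataset repo via hf://.
df = pl.read_parquet("hf://datasets/some-org/some-dataset/data.parquet")
print(df.head())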