HF-hub - Share and discover more about AI with social posts from the community. huggingface/OpenAi
Jetson Nano B01 is a single-board computer developed by NVIDIA, designed for AI and robotics applications. It is a board revision of the original Jetson Nano Developer Kit, most notably adding a second camera connector. Here are some key specifications and features of the Jetson Nano B01:

Key Specifications:
GPU: 128-core NVIDIA Maxwell architecture

CPU: Quad-core ARM Cortex-A57 MPCore processor

Memory: 4 GB 64-bit LPDDR4 (https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/product-development/)

Storage: microSD card slot (for operating system and data storage)

Connectivity:

Gigabit Ethernet

Wi-Fi (optional module)

Bluetooth (optional module)

4x USB 3.0 ports

HDMI 2.0 and DisplayPort 1.2

2x MIPI CSI-2 camera connectors

GPIO pins

Power: 5V DC, 4A (20W)

Features:
AI and Machine Learning: Supports NVIDIA JetPack SDK, which includes CUDA, cuDNN, and TensorRT for accelerating AI workloads.

Robotics and IoT: Suitable for developing robotics projects and IoT devices with AI capabilities.

Development Environment: Compatible with various development environments, including JetPack SDK, Ubuntu, and NVIDIA SDK Manager.

Expansion Capabilities: Supports various expansion modules and carrier boards through its GPIO pins and connectors.

Use Cases:
AI and Machine Learning Projects: Training and deploying deep learning models for image recognition, object detection, and more.

Robotics: Building and programming robots with advanced AI capabilities.

Smart Cameras: Developing smart camera systems with real-time video analytics.

IoT Devices: Creating IoT devices with embedded AI processing.

Getting Started:
Setup: Install the JetPack SDK and required libraries using the NVIDIA SDK Manager.

Development: Use Python, C++, or other supported languages to develop applications.

Deployment: Deploy your applications on the Jetson Nano B01 and connect peripherals as needed.
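As a quick sanity check after setup, a few lines of Python can confirm the GPU is usable from your deep learning stack. This is a minimal sketch, assuming you have installed NVIDIA's PyTorch build for Jetson (PyTorch is not bundled with JetPack itself):

# sanity_check.py - confirm the Jetson GPU is visible from Python
# Assumes NVIDIA's PyTorch build for Jetson is installed.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # e.g. a Tegra-class GPU
    x = torch.rand(512, 512, device="cuda")
    y = x @ x  # run a small matrix multiply on the 128 Maxwell cores
    print("GPU matmul OK:", y.shape)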

The Jetson Nano B01 is a powerful and versatile platform for developers and hobbyists looking to explore and implement AI and robotics projects. Its compact size, robust performance, and extensive support for AI libraries make it an excellent choice for a wide range of applications.
This summary covers the process of setting up and managing Coolify on a server, including server configuration, installing Coolify, securing user accounts, deploying projects (such as static websites and Next.js applications), configuring domain names and redirects, selecting a proxy server, and handling related security and optimization settings.
Highlights
Server Configuration: Details the selection and configuration requirements of the server, such as CPU, memory, and storage, and also introduces how to set up SSH keys, firewalls, and cloud configuration.
Coolify Installation: Emphasizes the steps to install Coolify, including obtaining the installation script, running as the root user, and basic settings after installation.
User Account Security: Covers security measures such as setting user passwords and enabling two-factor authentication.
Project Deployment: Introduces the deployment process of static websites and Next.js applications, including resource selection, environment settings, and build package selection.
Domain Name and Redirection Configuration: Explains how to set up DNS records, specify domain names in Coolify, and configure the proxy server for HTTPS and redirects.
Proxy Server Selection: Compares the characteristics and configuration of the two available proxy servers, Caddy and Traefik.
Complex Application Deployment: Using a Next.js application as an example, the video illustrates the advantages and customizability of one-click deployment with Nixpacks. https://www.youtube.com/watch?v=taJlPG82Ucw
What are Elasticsearch Plugins?
Elasticsearch is an open source, scalable search engine. Although Elasticsearch supports a large number of features out-of-the-box, it can also be extended with a variety of plugins to provide advanced analytics and process different data types.

This guide will show you how to install the following Elasticsearch plugins and interact with them using the Elasticsearch API:

ingest-attachment: allows Elasticsearch to index and search base64-encoded documents in formats such as RTF, PDF, and PPT.
analysis-phonetic: identifies search results that sound similar to the search term.
ingest-geoip: adds location information to indexed documents based on any IP addresses within the document.
ingest-user-agent: parses the User-Agent header of HTTP requests to provide identifying information about the client that sent each request. https://www.linode.com/docs/guides/a-guide-to-elasticsearch-plugins/
How to install an Elasticsearch plugin?
User Agent Processor Plugin
Install the plugin: sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent.
Restart Elasticsearch: sudo systemctl restart elasticsearch.
Confirm the plugin is installed: GET /_cat/plugins.
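Once installed, you can exercise the plugin through the ingest API. Below is a minimal Python sketch using the requests library against a cluster on the default localhost:9200; the pipeline and index names (user_agent_pipeline, weblogs) are illustrative, not from the guide:

# Create an ingest pipeline with the user_agent processor, then index
# a document through it. Assumes Elasticsearch is reachable locally.
import requests

ES = "http://localhost:9200"

pipeline = {
    "description": "Parse the User-Agent header",
    "processors": [{"user_agent": {"field": "agent"}}],
}
requests.put(f"{ES}/_ingest/pipeline/user_agent_pipeline", json=pipeline).raise_for_status()

doc = {"agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"}
r = requests.post(f"{ES}/weblogs/_doc?pipeline=user_agent_pipeline", json=doc)
print(r.json())  # the stored document gains a parsed user_agent field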
How to install extensions in Ubuntu terminal?
Step 1: Install the Browser Add-on. Install the official browser extension first. ...
Step 2: Install 'Chrome GNOME Shell' package. Step two is to install the native connector package that lets the browser extension communicate with the GNOME Shell desktop. ...
Step 3: Install Extensions.
To manually add a plugin to your WordPress website:
Download the desired plugin as a .zip file. ...
From your WordPress dashboard, choose Plugins > Add New.
Click Upload Plugin at the top of the page.
Click Choose File, locate the plugin .zip file. ...
After the installation is complete, click Activate Plugin.
How to install plugins in terminal?
To install the plug-ins:
Click Start, and enter cmd in the Search box. Command Prompt appears.
Click Run as administrator.
Navigate to the Enterprise Client installation path. For example, C:\Program Files (x86)\Automation Anywhere\Enterprise\Client.
How to install elastic on Ubuntu server?
Elasticsearch Tutorial: How to Install Elasticsearch on Ubuntu
Step 1: Update Your Ubuntu. ...
Step 2: Install Java. ...
Step 3: Download Elasticsearch. ...
Step 4: Install and Configure Elasticsearch on Ubuntu. ...
Step 5: Start Elasticsearch and Test It. ...
Step 6: Secure Elasticsearch on Ubuntu.
How to install plugin in Ubuntu?
How do I add plugins in Chrome in Ubuntu?
Method 1: Download or Install Google Chrome on Ubuntu.
Method 2: Access the Google Chrome Web Store.
Method 3: Chrome Search and Select the Plugin option.
Method 4: Download or Install the Plugin extension.
Method 5: Manage all the internal Plugins.
How to install stm32 IDE in Ubuntu?
Running on Ubuntu, this comprehensive integrated development environment (IDE) offers everything you need for software development.
Open the Terminal. ...
Update and Upgrade. ...
Download STM32Cube. ...
Install Java Runtime Environment. ...
Extract and Install STM32Cube. ...
Launch STM32Cube.
How to install zsh plugins on Ubuntu?
Step 1: Update your package index: sudo apt update.
Step 2: Install Zsh. To install Zsh, enter the following command: sudo apt install zsh.
Step 3: Set Zsh as Your Default Shell. Once installed, you can set Zsh as your default shell with this command: chsh -s $(which zsh) ...
Step 4: Install Oh My Zsh (Optional, but Recommended) ...
Step 5: Customize Zsh with Themes and Plugins.
How to install Coolify on Ubuntu?
What is Coolify?
Before we get our hands dirty, let's understand what Coolify is. Coolify is an open-source, self-hostable platform that allows you to deploy your web apps, static sites, and databases directly to your servers. It's like having your own Heroku but with the freedom to control every aspect of the infrastructure.

Why Ubuntu?
Ubuntu is known for its stability and widespread support, making it a favorite among developers for hosting applications. It's also well-documented and easy to use, providing a solid foundation for our Coolify installation.

Prerequisites
Before we begin, ensure you have the following:
- An Ubuntu server (20.04 LTS recommended)
- SSH access to your server
- Basic knowledge of the Linux command line

Step 1: Update Your Server
First things first, let’s make sure your Ubuntu server is up-to-date. Connect to your server via SSH and run the following commands:

sudo apt update
sudo apt upgrade -y
This will fetch the latest versions of the packages and upgrade them.

Step 2: Install Docker
Coolify runs on Docker, so our next step is to install Docker on your Ubuntu server. Execute the following commands:

sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce -y
To ensure Docker is installed correctly, run:

sudo systemctl status docker
You should see Docker running as a service.

Step 3: Install Docker Compose

Although Coolify uses its own version of Docker Compose, it’s good practice to have the official version installed:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Verify the installation with:

docker-compose --version
Step 4: Install Coolify
Now, we’re ready to install Coolify. Clone the Coolify repository and run the installation script:

git clone https://github.com/coollabsio/coolify.git
cd coolify/scripts
./install.sh
Follow the on-screen instructions to complete the setup.

Congratulations! You have successfully set up the environment necessary for running Coolify on your Ubuntu server. In the next part, I will cover how to configure Coolify, secure your setup, and deploy your first application.

Stay tuned, and happy deploying! GitHub - coollabsio/coolify: An open-source & self-hostable Heroku / Netlify / Vercel alternative.
An Introduction to the Transformers Library and the Hugging Face Platform
Source code: https://github.com/huggingface/transformers
Documentation: https://huggingface.co/docs/transformers/main/en/index

The Hugging Face platform is a collection of ready-to-use, state-of-the-art pretrained deep learning models, while the Transformers library provides tools and interfaces for loading and using them with minimal effort. This saves you the time and resources required to train models from scratch.

The models address a remarkably diverse range of tasks:

NLP: classification, NER, question answering, language modeling, summarization, translation, multiple choice, text generation.

CV: classification, object detection, segmentation.

Audio: classification, automatic speech recognition.

Multimodal: table question answering, optical character recognition, information extraction from scanned documents, video classification, visual question answering.

Reinforcement Learning

Time Series

One and the same task can be solved by different architectures, and the list of architectures is impressive: more than 150 at the time of writing. Among the best known are Vision Transformer (ViT), T5, ResNet, BERT, and GPT2. More than 60,000 models have been trained on these architectures. GitHub - huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
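The quickest way to try one of these models is the pipeline API, which downloads a suitable pretrained checkpoint and handles pre- and post-processing for you. A minimal sketch (the default checkpoint the pipeline picks can vary between library versions):

# Minimal use of the Transformers pipeline API.
from transformers import pipeline

# Downloads a default pretrained sentiment model on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face makes model reuse easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]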
How to Use Hugging Face Agents to Solve NLP Tasks
Hugging Face, the open-source AI community for machine learning practitioners, recently integrated the concept of tools and agents into its popular Transformers library. If you have already used Hugging Face for natural language processing (NLP), computer vision, or audio/speech tasks, you will be interested in these additional Transformers capabilities.

Agents significantly improve the user experience. Suppose you want to use a model from the Hugging Face Hub to translate English into French. You would have to do some research to find a good model, figure out exactly how to use it, and then write the code to produce the translation.

But what if you had at your disposal an expert on the Hugging Face platform who had already mastered all of this? You would simply tell the expert that you want to translate a sentence from English into French, and the expert would take care of finding a good model, writing the code for the task, and returning the result, all much faster than you or I could.

That is exactly what agents are for. You describe a task to the agent in plain English, and the agent surveys the tools in its arsenal and carries out your task. It is much like asking ChatGPT to translate a sentence and letting ChatGPT take care of the rest, except that instead of being limited to the few models ChatGPT relies on (i.e., OpenAI models such as GPT-3.5 and GPT-4), agents have access to the many models available on Hugging Face.

Now that we understand how agents and tools work, let's see how to put these capabilities into practice.
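As a rough sketch of what this looks like in code: the agents API in Transformers at the time exposed an HfAgent class driven by a remote code-generation model. The StarCoder endpoint below is the one used in Hugging Face's own examples and may change:

# Sketch of the Transformers agents API (transformers >= 4.29).
from transformers import HfAgent

# A remote LLM acts as the "expert" that picks tools and writes the code.
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# Describe the task in plain English; the agent finds a translation tool.
print(agent.run("Translate 'Good morning' from English to French."))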
Google launches first AI-powered Android update and new Pixel 9 phones
Google on Tuesday announced new artificial intelligence features that are coming to Android devices. The move to bring its Gemini AI assistant to supported devices shows again how Google aims to put its AI in front of consumers before Apple, which will launch its AI on iPhones, Macs and iPads later this year.

Google doesn’t make a lot of money from its hardware business but the latest Android features could help drive new revenue through the company’s Gemini AI subscription program.

“We’ve completely rebuilt the assistant experience with Gemini, so you can speak to it naturally the way you would with another person,” said Android Ecosystem President Sameer Samat in a Tuesday blog post. “It can understand your intent, follow your train of thought and complete complex tasks.”

“Starting today, you can bring up Gemini’s overlay on top of the app you’re using to ask questions about what’s on your screen,” Samat wrote. It will be available on hundreds of phone models from dozens of device makers, according to Google.

Google previously had some AI features in Android, but this is the first year it’s heavily emphasizing new capabilities powered by a large AI language model installed on devices.

One example the company provided involved a user uploading a photo of a concert list and asking Gemini to see if their calendar is free, after which Gemini checks Google Calendar. If the user has availability in their schedule, Gemini offers to create a reminder to check ticket prices later that night.
https://www.cnbc.com/2024/08/13/google-pixel-9-phones-first-ai-powered-android-update-announced.html Google launches first AI-powered Android update and new Pixel 9 phones
Gemini makes your mobile device a powerful AI assistant
Aug 13, 2024


Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.
For years, we’ve relied on digital assistants to set timers, play music or control our smart homes. This technology has made it easier to get things done and saved valuable minutes each day.

Now with generative AI, we can provide a whole new type of help for complex tasks that can save you hours. With Gemini, we’re reimagining what it means for a personal assistant to be truly helpful. Gemini is evolving to provide AI-powered mobile assistance that will offer a new level of help — all while being more natural, conversational and intuitive.

Learn more about the new Gemini features, which will be available on both Android and iOS.

Rolling out today: Gemini Live
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.
https://blog.google/products/gemini/made-by-google-gemini-ai-updates/ Gemini makes your mobile device a powerful AI assistant
Context Caching with Gemini 1.5 Flash
Google recently released a new feature called context caching, which is available via the Gemini APIs for the Gemini 1.5 Pro and Gemini 1.5 Flash models. This guide provides a basic example of how to use context caching with Gemini 1.5 Flash.


https://youtu.be/987Pd89EDPs?si=j43isgNb0uwH5AeI

The Use Case: Analyzing a Year's Worth of ML Papers
The guide demonstrates how you can use context caching to analyze the summaries of all the ML papers we've documented over the past year. We store these summaries in a text file, which can now be fed to the Gemini 1.5 Flash model and queried efficiently.
https://www.promptingguide.ai/applications/context-caching
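A minimal sketch with the google-generativeai Python SDK, assuming an API key is configured and the summaries live in a local papers.txt (the file name and TTL are illustrative):

# Context caching with Gemini 1.5 Flash via google-generativeai.
import datetime
import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="YOUR_API_KEY")  # or read from the environment

summaries = open("papers.txt").read()  # illustrative file of paper summaries

cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    system_instruction="Answer questions about the provided ML paper summaries.",
    contents=[summaries],
    ttl=datetime.timedelta(minutes=30),  # how long the cached tokens live
)

# Later queries reuse the cached tokens instead of resending the whole file.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
print(model.generate_content("Which papers discuss mixture-of-experts?").text)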
Function Calling with GPT-4
As a basic example, let's say we asked the model to check the weather in a given location.

The LLM alone would not be able to respond to this request because it was trained on a dataset with a knowledge cutoff. The way to solve this is to combine the LLM with an external tool: you can leverage the model's function calling capabilities to determine an external function to call along with its arguments, and then have it return a final response. Below is a simple example of how you can achieve this using the OpenAI APIs.

Let's say a user is asking the following question to the model:

What is the weather like in London?

To handle this request using function calling, the first step is to define a weather function or set of functions that you will be passing as part of the OpenAI API request:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]

The get_current_weather function returns the current weather in a given location. When you pass this function definition as part of the request, the model doesn't actually execute the function; it just returns a JSON object containing the arguments needed to call it. Here are some code snippets showing how to achieve this. https://www.promptingguide.ai/applications/function_calling
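The next step is to pass the tools list in a chat completion request and read back the tool call the model proposes. A minimal sketch with the official openai Python SDK (v1 client style):

# Ask the model; it proposes a get_current_weather call instead of answering.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "What is the weather like in London?"}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    tools=tools,  # the list defined above
    tool_choice="auto",
)

tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # "get_current_weather"
print(tool_call.function.arguments)  # e.g. '{"location": "London"}'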
Function Calling with LLMs
Getting Started with Function Calling

Function calling is the ability to reliably connect LLMs to external tools to enable effective tool usage and interaction with external APIs.

LLMs like GPT-4 and GPT-3.5 have been fine-tuned to detect when a function needs to be called and then output JSON containing arguments to call the function. The functions that are being called by function calling will act as tools in your AI application and you can define more than one in a single request.

Function calling is an important ability for building LLM-powered chatbots or agents that need to retrieve context for an LLM or interact with external tools by converting natural language into API calls.

Function calling enables developers to create:

conversational agents that can efficiently use external tools to answer questions. For example, the query "What is the weather like in Belize?" will be converted to a function call such as get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
LLM-powered solutions for extracting and tagging data (e.g., extracting people names from a Wikipedia article)
applications that can help convert natural language to API calls or valid database queries
conversational knowledge retrieval engines that interact with a knowledge base
In this guide, we demonstrate how to prompt models like GPT-4 and open-source models to perform function calling for different use cases. https://www.promptingguide.ai/applications/function_calling
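As a variation on the weather example, the same mechanism works for data extraction: define a "function" whose arguments are the structured output you want and force the model to call it. A hedged sketch (the record_people schema and the model choice are illustrative, not from the guide):

# Using function calling as a structured-data extractor.
from openai import OpenAI

client = OpenAI()

extract_tool = {
    "type": "function",
    "function": {
        "name": "record_people",  # illustrative tool name
        "description": "Record the names of people mentioned in the text",
        "parameters": {
            "type": "object",
            "properties": {
                "names": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["names"],
        },
    },
}

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Ada Lovelace worked with Charles Babbage."}],
    tools=[extract_tool],
    tool_choice={"type": "function", "function": {"name": "record_people"}},
)
print(resp.choices[0].message.tool_calls[0].function.arguments)
# e.g. '{"names": ["Ada Lovelace", "Charles Babbage"]}'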
Hugging Face Space by GlidingDragonEntertainment

Let's create a simple Python app using FastAPI:

requirements.txt


fastapi
uvicorn[standard]
Hint: You can also create the requirements file directly in your browser.
app.py


from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def greet_json():
    return {"Hello": "World!"}
Hint: You can also create the app file directly in your browser.
Create your Dockerfile:


# Read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile

FROM python:3.9

RUN useradd -m -u 1000 user
USER user
ENV PATH="/home/user/.local/bin:$PATH"

WORKDIR /app

COPY --chown=user ./requirements.txt requirements.txt
RUN pip install --no-cache-dir --upgrade -r requirements.txt

COPY --chown=user . /app
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
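To try the Space locally before pushing, build and run the image (for example docker build -t my-space . followed by docker run -p 7860:7860 my-space, where my-space is an illustrative tag), then confirm the endpoint responds; a quick Python check:

# Smoke test for the running container.
import requests

resp = requests.get("http://localhost:7860/")
assert resp.status_code == 200
print(resp.json())  # {"Hello": "World!"}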

https://huggingface.co/spaces/GlidingDragonEntertainment/README Docker Spaces