A Brief History of HuggingFace
Founded in 2016, HuggingFace (named after the popular 🤗 emoji) started as a chatbot company and later transformed into an open-source provider of NLP technologies. At the time, the chatbot, aimed at a teenage demographic, was focused on:
(...) building an AI so that you’re having fun talking with it. When you’re chatting with it, you’re going to laugh and smile — it’s going to be entertaining
- Clem Delangue, CEO & Co-founder
Like a Tamagotchi, the chatbot could talk coherently about a wide range of topics, detect emotions in text, and adapt its tone accordingly.
Underlying this chatbot, however, were HuggingFace's main strengths: in-house NLP models (one of which was called Hierarchical Multi-Task Learning (HMTL)) and a managed library of pre-trained NLP models. This would serve as the early backbone of the transformers library we know today.
The early PyTorch transformers library established compatibility between PyTorch and TensorFlow 2.0, enabling users to move easily from one framework to the other during the life of a model. With the release of Google's "Attention Is All You Need" paper and the resulting shift to transformers in the NLP space, HuggingFace, who had already released parts of the powerful library powering their chatbot as an open-source project on GitHub, began to focus on porting popular large language models such as BERT and GPT to PyTorch.
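To make that interoperability concrete, here is a minimal sketch (assuming both PyTorch and TensorFlow 2.0 are installed, and using bert-base-uncased purely as an example checkpoint) of how the same pre-trained weights can be loaded in either framework with transformers:

```python
# Minimal sketch of PyTorch <-> TensorFlow 2.0 interoperability in transformers.
# "bert-base-uncased" is an example checkpoint; any compatible model would do.
from transformers import BertModel, TFBertModel

# Load the PyTorch version of a pre-trained checkpoint and save it locally.
pt_model = BertModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./bert-checkpoint")

# Re-load the same weights as a TensorFlow 2.0 model, converting from PyTorch.
tf_model = TFBertModel.from_pretrained("./bert-checkpoint", from_pt=True)
```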
With the most recent Series C funding round leading to a $2 billion valuation, HuggingFace currently offers an ecosystem of models and datasets spread across its various tools such as the HuggingFace Hub, transformers, diffusers, and more.
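As a quick illustration of how the Hub and transformers fit together, the sketch below (the task and input string are arbitrary examples) downloads a pre-trained model from the Hub and runs it locally:

```python
# Illustrative example: the pipeline API fetches a default sentiment-analysis
# model from the HuggingFace Hub on first use and caches it locally.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("HuggingFace makes it easy to share models."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```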