HeBERT: Pre-trained BERT for Polarity Analysis and Emotion Recognition
HeBERT is a Hebrew pre-trained language model based on Google's BERT architecture, using the BERT-Base configuration (Devlin et al., 2018).

HeBERT was trained on three datasets:

A Hebrew version of OSCAR (Ortiz, 2019): ~9.8 GB of data, including 1 billion words and over 20.8 million sentences.
A Hebrew dump of Wikipedia: ~650 MB of data, including over 63 million words and 3.8 million sentences.
Emotion UGC data collected for the purpose of this study (described below).

We evaluated the model on two downstream tasks: emotion recognition and sentiment analysis.
https://huggingface.co/avichr/heBERT_sentiment_analysis
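A minimal usage sketch for the sentiment model linked above, assuming the standard Hugging Face transformers pipeline API (the exact output labels depend on the model's configuration):

```python
from transformers import pipeline

# Load the HeBERT sentiment model from the Hugging Face hub.
# The model name comes from the link above; the label set
# (e.g. positive/negative/neutral) is determined by the model config.
sentiment = pipeline(
    "sentiment-analysis",
    model="avichr/heBERT_sentiment_analysis",
    tokenizer="avichr/heBERT_sentiment_analysis",
)

# Example Hebrew input ("I love life").
print(sentiment("אני אוהב את החיים"))
# Output is a list of {'label': ..., 'score': ...} dicts.
```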