MiniCPM-V and OmniLMM are a family of open-source large multimodal models (LMMs) for vision-language modeling. The models take images and text as input and deliver high-quality text output. We release two featured versions, targeting strong performance and efficient deployment (a minimal inference sketch follows the list below):

MiniCPM-V 2.8B: State-of-the-art end-side large multimodal model. Our latest MiniCPM-V 2.0 accepts images of up to 1.8 million pixels (e.g., 1344x1344) at any aspect ratio and has strong OCR capability. It achieves performance comparable to Gemini Pro in scene-text understanding and matches GPT-4V in preventing hallucinations.

OmniLMM 12B: The most capable version, with leading performance among comparable-sized models on multiple benchmarks. The model also achieves state-of-the-art performance in trustworthy behaviors, hallucinating even less than GPT-4V.
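
The sketch below shows what image + text in, text out looks like in practice. It assumes the MiniCPM-V 2.0 checkpoint published on Hugging Face as `openbmb/MiniCPM-V-2`, which exposes a custom `chat` method via `trust_remote_code=True`; the image path, question, and sampling settings are illustrative placeholders.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Load the model and tokenizer; trust_remote_code pulls in the
# model-specific code (including the chat interface) from the hub.
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2',
                                  trust_remote_code=True,
                                  torch_dtype=torch.bfloat16)
model = model.to(device='cuda', dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2',
                                          trust_remote_code=True)
model.eval()

# Placeholder inputs: any RGB image at any aspect ratio, plus a question.
image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is the text in this image?'}]

# Multi-turn chat: pass the image and the message history,
# get back the text answer and the updated context.
res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(res)
```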