
Vivo Unveils BlueLM 3B, an On-Device Multimodal Model That Ranks No. 1 Among Sub-10B Models
Vivo launches BlueLM 3B, an on-device multimodal model that claims No. 1 among sub-10B models, with 128K context and unified “One Model” capabilities.
On Oct. 10, 2025, vivo introduced BlueLM 3B, an on-device multimodal reasoning model designed to run entirely on smartphones. After a year of training and optimization, vivo says the 3-billion-parameter model unifies five core capabilities into a single “One Model” and supports 128K context length.
Vivo reports that BlueLM 3B outperformed every 8B model listed on the OpenCompass multimodal benchmark, claiming “absolute leadership under 10B.” On SuperCLUE’s mobile on-device LLM benchmark, BlueLM 3B likewise ranked No. 1 among models of 10B parameters or fewer. Separately, the company says its AI achieved the L3 “Excellence” tier—currently the top level—in the China Academy of Information and Communications Technology’s Terminal Intelligent Service Capability assessment.
Alongside the on-device model, vivo announced upgrades to its image foundation model. A staged, progressive training schedule is said to improve text-image alignment and visual quality; a deeply optimized glyph-control network targets long-text rendering, enabling more precise text generation within images. Vivo is also rolling out a suite of AI photo-editing features built on the upgraded image model.