SenseTime Launches “SenseNova” Foundation Model Sets and AI Computing Systems
Chinese artificial intelligence company SenseTime hosted a Tech Day event on April 10, sharing its strategic plan for advancing Artificial General Intelligence (AGI) through a combination of foundation models and large-scale computing. Under this strategy, SenseTime unveiled the “SenseNova” foundation model set, introducing a range of foundation models and capabilities spanning natural language processing, content generation, automated data annotation, and custom model training.
Over five years, SenseTime has built SenseCore, a leading AI infrastructure with 27,000 GPUs delivering a total of 5,000 petaflops of computing power, making it one of the largest intelligent computing platforms in Asia. Building on this infrastructure, SenseTime has trained foundation models across fields including computer vision, natural language processing, AI content generation, multimodality, and decision intelligence.
Under “SenseNova”, SenseTime introduced “SenseChat”, its latest large language model (LLM). With hundreds of billions of parameters, SenseChat is trained on a vast amount of data with particular attention to the Chinese context, enabling it to better understand and process Chinese text. SenseChat demonstrated its capabilities in multi-turn dialogue, logical reasoning, language correction, content creation, and sentiment analysis.
SenseTime also showcased various generative AI models and applications under “SenseNova”, including text-to-image creation, 2D/3D digital human generation, and complex scene and detailed object generation. The “SenseMirage” text-to-image platform demonstrated strong image-generation capabilities with realistic lighting, rich detail, and diverse styles, and supports 6K ultra-high-definition output. Customers can also train and fine-tune their own generative models tailored to their individual styles. The “SenseAvatar” AI digital human generation platform can create digital avatars with natural voice and movement, accurate lip-sync, and multilingual proficiency from just a five-minute video clip of a real person. The “SenseSpace” and “SenseThings” 3D content-generation platforms can efficiently and cost-effectively generate large-scale 3D scenes and detailed objects, opening new possibilities for metaverse and mixed-reality applications.
Leveraging the SenseCore infrastructure and “SenseNova” foundation models, SenseTime offers a range of Model-as-a-Service (MaaS) solutions to industry partners, covering automated data annotation, customized model training and fine-tuning, model inference deployment, and development efficiency enhancement.
Xu Li, Chairman and CEO of SenseTime, said, “In the era of AGI, the three elements of data, algorithms, and computing power are undergoing a new evolution. The number of model parameters will increase exponentially, and the volume of data will grow massively with the introduction of multimodality, leading to a continuous surge in demand for computing power. We have built the infrastructure for the AGI era with SenseCore and named our foundation model set ‘SenseNova’, implying ‘constant renewal, daily renewal, and further renewal’. We hope to continuously accelerate the models’ iteration and enhance their problem-solving capabilities, unlocking more possibilities for AGI.”