Tencent Releases Open-Source MoE Large Language Model Hunyuan-Large
On November 5th, Tencent released Hunyuan-Large, an open-source MoE large language model with 389 billion total parameters and 52 billion activated parameters, making it the largest open-source MoE model in the industry. Public evaluation results show that Tencent's Hunyuan-Large leads comprehensive benchmark suites such as CMMLU, MMLU, CEval, and MATH across nine dimensions, spanning Chinese and English NLP tasks, code processing, and mathematics, surpassing top-tier open-source models such as Llama3.1 and Mixtral. The model reportedly generates high-quality synthetic data through technical innovation, using it to augment training and effectively address the shortage of natural data. In terms of context processing capability, the pre-trained model supports text sequences of up to 256K tokens, significantly enhancing its ability to handle long-context tasks.
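The gap between total and activated parameters reflects the mixture-of-experts design, in which a router sends each token to only a few expert sub-networks, so each forward pass uses a small fraction of the model's full parameter count. The sketch below is a generic top-k MoE feed-forward layer for illustration only; it is not Tencent's implementation, and all names and dimensions (TopKMoE, d_model, n_experts, top_k) are made-up assumptions.

```python
# Illustrative sketch of a generic top-k MoE layer, NOT Hunyuan-Large's
# actual code. All sizes are arbitrary placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=16, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.router(x)                         # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # route each token to k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

layer = TopKMoE()
y = layer(torch.randn(8, 1024))
print(y.shape)  # torch.Size([8, 1024])
```

Because only top_k of n_experts run per token, the parameters activated per forward pass are a small fraction of the total held in memory, which is the same principle behind Hunyuan-Large's 389 billion total versus 52 billion activated parameters.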
At the same time, Tencent Hunyuan announced that, to address the industry's lack of real-world long-text evaluation sets, it will soon open-source the PenguinScrolls evaluation set to support application research. The self-developed PenguinScrolls is built from various types of natural long texts, including public financial documents, legal texts, and academic papers, ranging in length from 1K to 128K, and covers a variety of deep reading comprehension and long-text reasoning tasks.
SEE ALSO: “Tencent Yuanbao” Launches: Based on the Hunyuan LM, Supports AI Search, Summarization, Writing