<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Pandaily - China Tech News, AI &amp; Electric Vehicle Insights</title>
        <link>https://pandaily.com</link>
        <description>Latest technology news, AI breakthroughs, and electric vehicle developments from China's innovative tech landscape</description>
        <lastBuildDate>Fri, 15 May 2026 14:52:08 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>Pandaily RSS Feed Generator</generator>
        <language>en</language>
        <image>
            <title>Pandaily - China Tech News, AI &amp; Electric Vehicle Insights</title>
            <url>https://pandaily.com/favicon.ico</url>
            <link>https://pandaily.com</link>
        </image>
        <copyright>© 2026 Pandaily. All rights reserved.</copyright>
        <item>
            <title><![CDATA[DeepCybo: Beijing Startup Betting on First-Person Human Data for Embodied AGI]]></title>
            <link>https://pandaily.com/deep-cybo-beijing-startup-betting-on-first-person-human-data-for-embodied-agi</link>
            <guid isPermaLink="false">https://pandaily.com/deep-cybo-beijing-startup-betting-on-first-person-human-data-for-embodied-agi</guid>
            <pubDate>Fri, 15 May 2026 10:50:01 GMT</pubDate>
            <description><![CDATA[Beijing AI startup DeepCybo, founded by Chen Kai, has raised hundreds of millions of RMB pursuing embodied AGI through first-person human video data—now validated by Tesla, NVIDIA, and Figure AI pivoting to the same approach.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_15_184321_23c18cf03f.png" alt="DeepCybo: Beijing Startup Betting on First-Person Human Data for Embodied AGI" style="max-width: 100%; height: auto;" /><br/><br/><p>DeepCybo, a Beijing-based AI startup founded by Chen Kai, is pushing toward embodied AGI using first-person human video data—a route that even Silicon Valley giants are now betting on.</p> <p>Founded in early 2025, DeepCybo faced skepticism from domestic investors who questioned why the company was pursuing a path American companies hadn&#39;t yet proven viable. The startup&#39;s core thesis: only by training AI systems on authentic human first-person video data can true embodied intelligence be achieved.</p> <p><img src="https://cms-image.pandaily.com/2026_05_15_184402_a94dcf7558.png" alt="Screenshot 2026-05-15 184402.png"></p> <p>The tide turned in May 2025, when Tesla announced it was shifting Optimus robot training toward human video data. GeneralistAI followed in June with a demo of robotic imitation learning, later validating scaling laws with 270,000 hours of real human-collected data. Figure AI also announced partnerships with commercial real estate companies to collect first-person data from humans.</p> <p>In February 2026, NVIDIA released EgoScale, pre-training robots on 20,000 hours of first-person video data for dexterous manipulation. By this point, the market realized that DeepCybo had been on this trajectory a full year ahead of the global industry consensus. The company has raised hundreds of millions of RMB and is now scaling rapidly to validate its approach.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>Startups</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_15_184321_23c18cf03f.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[XbotGo Raises ~RMB 100M Led by Ninebot Capital for AI Sports Cameras]]></title>
            <link>https://pandaily.com/xbotgo-ninebot-capital-ai-sports-camera</link>
            <guid isPermaLink="false">https://pandaily.com/xbotgo-ninebot-capital-ai-sports-camera</guid>
            <pubDate>Fri, 15 May 2026 10:37:07 GMT</pubDate>
            <description><![CDATA[XbotGo (深眸远智), an AI sports imaging startup founded by a Chinese team, raises ~RMB 100M led by Ninebot Capital (Segway-Ninebot) to expand its AI camera business for youth sports markets.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_15_183358_7bd66dd934.png" alt="XbotGo Raises ~RMB 100M Led by Ninebot Capital for AI Sports Cameras" style="max-width: 100%; height: auto;" /><br/><br/><p>XbotGo (深眸远智), a Chinese AI-powered sports imaging company, has closed a new funding round of approximately RMB 100 million, led by Ninebot Capital (九号资本, Segway-Ninebot's investment arm), with participation from Yuanhua Holdings, Different Capital, and existing investor ZeroOne Ventures.</p> <p>The company has completed three product iterations and launched its first standalone AI sports camera. Founded after CEO Tan Kefeng repeatedly watched his children play sports and struggled to capture quality footage, XbotGo's products use AI to automatically track and film athletic activities, generate highlight clips, provide training data analysis, and enable multi-camera live streaming.</p> <p>The US youth sports market represents a massive opportunity—with over 30 million teenagers participating in organized sports training, and American families spending an average of 10 hours per week on children's sports activities. The core pain point: manual filming distracts parents from the experience while making it nearly impossible to capture optimal moments due to distance, small targets, and complex scenes.</p> <p>XbotGo's latest product, the Falcon AI camera, operates independently without requiring a smartphone connection.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>Startups</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_15_183358_7bd66dd934.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[SMIC: AI Boom Drives Power Management Chip Demand, Automotive BCD Platform in High Demand]]></title>
            <link>https://pandaily.com/smic-ai-power-management-chip-demand</link>
            <guid isPermaLink="false">https://pandaily.com/smic-ai-power-management-chip-demand</guid>
            <pubDate>Fri, 15 May 2026 10:28:41 GMT</pubDate>
            <description><![CDATA[SMIC (中芯国际) co-CEO Zhao Haijun reports AI boom is boosting demand for power management and data transmission chips, tightening memory supply chains and driving strong orders for the company's automotive-grade analog BCD platform.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/smic_logo_f5ed705019.png" alt="SMIC: AI Boom Drives Power Management Chip Demand, Automotive BCD Platform in High Demand" style="max-width: 100%; height: auto;" /><br/><br/><p>Semiconductor Manufacturing International Corporation (SMIC / 中芯国际) co-CEO Zhao Haijun stated at the company's earnings call that the AI boom is driving strong demand for power management chips and data transmission chips, while simultaneously constraining NOR Flash and other memory product supply chains.</p> <p>The company's standalone Flash process platform and analog process platforms are seeing robust demand. SMIC's automotive-grade processes cover logic, analog BCD, embedded storage, standalone Flash, display drivers, image sensors, and power devices—capabilities that have been refined over years and are now seeing meaningful volume deployment.</p> <p>In particular, SMIC's automotive analog BCD platform is in high demand with full order books, as automotive semiconductor content continues expanding with the shift toward electric and intelligent vehicles.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/1/smic_logo_f5ed705019.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[MediaTek Dimensity: The Chip Platform Powering Smartphone AI Agents]]></title>
            <link>https://pandaily.com/mediatek-dimensity-smartphone-ai-agents</link>
            <guid isPermaLink="false">https://pandaily.com/mediatek-dimensity-smartphone-ai-agents</guid>
            <pubDate>Fri, 15 May 2026 10:26:16 GMT</pubDate>
            <description><![CDATA[MediaTek's latest Dimensity (天玑) developer conference positions the chip platform as key to enabling smartphone AI agents, as daily autonomous AI task volume surged 7x year-over-year to 870 million in 2026.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/mediatek_dimensity_7d8d21f848.jpg" alt="MediaTek Dimensity: The Chip Platform Powering Smartphone AI Agents" style="max-width: 100%; height: auto;" /><br/><br/><p>MediaTek (联发科) is positioning its Dimensity (天玑) chip platform as the foundation for smartphone AI agent experiences, as the industry pivots toward on-device AI that balances computing power, power efficiency, and system-level context awareness.</p> <p>At the latest Dimensity Developer Conference, MediaTek outlined its vision for "smart agent" AI experiences that shift devices from passive response to active, anticipatory behavior. The three core challenges for AI agents on smartphones—balancing performance and power consumption, enabling proactive sensing, and unifying fragmented application ecosystems—require tight co-optimization of silicon, firmware, and software.</p> <p>MediaTek's Dimensity chips aim to address all three through dedicated AI processing units, advanced process nodes, and the Dimensity AI engine that coordinates on-device model inference across CPU, GPU, and NPU resources.</p> <p>The broader AI agent market is seeing explosive growth: daily autonomous agent task volume grew from 120 million in 2025 to 870 million in 2026—a 7x increase year-over-year.</p> <p>Source: 量子位 / QbitAI</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>Gadgets</category>
            <enclosure url="https://cms-image.pandaily.com/1/mediatek_dimensity_7d8d21f848.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Tencent Q1 2026: Revenue Up 9% as AI Investment Surges]]></title>
            <link>https://pandaily.com/tencent-q1-2026-ai-investment-results</link>
            <guid isPermaLink="false">https://pandaily.com/tencent-q1-2026-ai-investment-results</guid>
            <pubDate>Fri, 15 May 2026 08:28:56 GMT</pubDate>
            <description><![CDATA[Tencent Q1 2026 earnings show 9% revenue growth to RMB 196.46 billion, with management signaling significant AI capex expansion through 2026.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/tencent_q1_69ed1df022.jpg" alt="Tencent Q1 2026: Revenue Up 9% as AI Investment Surges" style="max-width: 100%; height: auto;" /><br/><br/><p>Tencent reported first quarter 2026 results with total revenue of RMB 196.46 billion, up 9% year-over-year, with Non-IFRS operating profit of RMB 75.63 billion, also up 9% year-over-year.</p> <p>However, the company's heavy AI investments have impacted profitability. Excluding the impact of new AI products (Hy, Yuanbao, CodeBuddy, WorkBuddy, and QClaw), Non-IFRS operating profit would have grown 17% year-over-year instead.</p> <p>R&amp;D investment reached RMB 22.54 billion in Q1, up 19% year-over-year, while capital expenditure was RMB 31.94 billion, up 16% year-over-year. Tencent President Martin Lau stated on the earnings call that the company expects capital expenditure to continue rising significantly, especially in the second half of 2026.</p> <p>Tencent's AI strategy prioritizes integrating AI capabilities across its existing business ecosystem rather than pursuing standalone AI products. The company's latest AI offerings include Yuanbao (consumer AI assistant), CodeBuddy (AI coding tool), and WorkBuddy (enterprise AI solution).</p> <p>Source: 36kr / 智能涌现 (AIEmergence)</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/1/tencent_q1_69ed1df022.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Former Kepler Robotics CEO Launches New Venture Sota Unlimited, Betting on 'Robot Brain' for Overseas Markets]]></title>
            <link>https://pandaily.com/former-kepler-robotics-ceo-sota-unlimited-robot-brain-overseas</link>
            <guid isPermaLink="false">https://pandaily.com/former-kepler-robotics-ceo-sota-unlimited-robot-brain-overseas</guid>
            <pubDate>Fri, 15 May 2026 07:17:12 GMT</pubDate>
            <description><![CDATA[Hu Debo, former CEO of Kepler Robotics, has launched his second embodied AI startup — Sota Unlimited — focusing on the 'robot brain' rather than full-stack robots.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/sota_4def6ce93d.jpg" alt="Former Kepler Robotics CEO Launches New Venture Sota Unlimited, Betting on 'Robot Brain' for Overseas Markets" style="max-width: 100%; height: auto;" /><br/><br/><p>On May 14, GeekPark learned that Hu Debo, former CEO of Kepler Robotics, has launched a second venture in the embodied AI track. The new company is called &quot;Sota Unlimited.&quot;</p> <p>This time, he has chosen a different path. In 2023, when the humanoid robot track was just heating up, Hu co-founded Kepler Robotics, building full-stack bipedal humanoids and dual-arm wheeled-base (底盘双臂) platforms for industrial scenarios. Two years later, Kepler had become one of China&#39;s representative humanoid robot companies for industrial scenes, completing a billion-yuan A++ round in April this year.</p> <p>But Hu&#39;s new project neither continues the bet on full-stack machines nor targets industrial landing scenarios. Sota Unlimited focuses on the embodied AI &quot;brain&quot; itself: world action models, multimodal VLA, and data collection systems — attempting to solve the hardest link when robots truly enter the physical world. The goal is not merely &quot;seeing&quot; the world, but understanding contact, motion, space, and physics.</p> <p>Sota Unlimited will showcase its complete brain capabilities this summer, including world models, multimodal VLA, and the Physica-Claw robot operating system, and will complete end-to-end lab demonstrations of early commercial scenarios.</p> <p>The company&#39;s overseas strategy is particularly noteworthy. While Chinese humanoid robot companies are predominantly focused on the domestic market, Sota Unlimited is targeting global markets from the outset — positioning itself as a &quot;robot brain&quot; supplier for international robotics companies.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/1/sota_4def6ce93d.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Tencent Q1 2026 Earnings: Revenue Reaches 196.5B Yuan as Pony Ma Says 'Got on the AI Boat, Found It Was Leaking']]></title>
            <link>https://pandaily.com/tencent-q1-2026-earnings-revenue-196-billion-yuan-ai-progress</link>
            <guid isPermaLink="false">https://pandaily.com/tencent-q1-2026-earnings-revenue-196-billion-yuan-ai-progress</guid>
            <pubDate>Fri, 15 May 2026 07:13:58 GMT</pubDate>
            <description><![CDATA[Tencent reported Q1 2026 revenue of 196.46B yuan, up 9% YoY. Pony Ma's candid assessment of AI progress reflects both humor and self-awareness as Tencent accelerates AI integration across its ecosystem.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/tencent_q1_c38ba8a928.jpg" alt="Tencent Q1 2026 Earnings: Revenue Reaches 196.5B Yuan as Pony Ma Says 'Got on the AI Boat, Found It Was Leaking'" style="max-width: 100%; height: auto;" /><br/><br/><p>On May 13, Tencent released its Q1 2026 financial results, with AI accelerating across the board. Tencent&#39;s Q1 revenue was 196.46 billion yuan, up 9% year-on-year; Non-IFRS operating profit was 75.63 billion yuan, up 9% year-on-year. Excluding the impact of new AI products, Non-IFRS operating profit increased 17% to 84.4 billion yuan. Free cash flow reached 56.7 billion yuan during the period.</p> <p>But the truly noteworthy point is not these headline financial figures. Over the past several quarters, outside observers have debated whether Tencent is &quot;slow&quot; on AI. This earnings report offers a clear answer: the discussion is no longer confined to model parameters, technical routes, or organizational adjustments. Instead, the report demonstrates systematically, for the first time, how AI is deeply embedded within Tencent&#39;s business systems and is in turn driving corporate growth.</p> <p>From the explosive call volume of the Hy3 preview, to WorkBuddy becoming the highest-DAU AI office agent domestically; from Tencent Cloud&#39;s international business growing over 40% year-on-year, to advertising, gaming, and office collaboration all seeing AI-driven improvements — Tencent&#39;s AI deployment is beginning to generate measurable business returns.</p> <p>Pony Ma&#39;s candid remark that &quot;a year ago I got on the AI boat, only to find it was leaking&quot; reflects both humor and a realistic assessment of the challenges ahead. Tencent&#39;s AI journey is clearly still in its early stages, but the strategic direction is now validated by results.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/1/tencent_q1_c38ba8a928.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Behind Honor's Robot Marathon: 7 Chinese Suppliers Powering the 'Lightning' Robot]]></title>
            <link>https://pandaily.com/honor-robot-marathon-7-chinese-suppliers</link>
            <guid isPermaLink="false">https://pandaily.com/honor-robot-marathon-7-chinese-suppliers</guid>
            <pubDate>Fri, 15 May 2026 07:12:07 GMT</pubDate>
            <description><![CDATA[Honor robots swept the top six positions at the Beijing Yizhuang half-marathon. A deep dive into the Chinese supply chain — from GigaDevice to Rainbow Technologies — powering the breakthrough.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_15_151120_8931f8921a.png" alt="Behind Honor's Robot Marathon: 7 Chinese Suppliers Powering the 'Lightning' Robot" style="max-width: 100%; height: auto;" /><br/><br/><p>On April 19, 2026, a landmark moment unfolded at the Beijing Yizhuang Humanoid Robot Half-Marathon — the world’s first official humanoid robot marathon event.</p> <p>On the 21.0975-kilometer course, history was rewritten. Honor&#39;s &quot;Lightning&quot; robot from the Qitian Daxiansen team crossed the finish line with a net time of 50 minutes and 26 seconds, cutting nearly two-thirds off last year’s championship record of 2 hours 40 minutes and surpassing the human men’s half-marathon world record of 57 minutes 20 seconds.</p> <p>This is undoubtedly a landmark moment for China&#39;s humanoid robotics industry. But focusing only on Honor&#39;s success misses the truly important story: supporting &quot;Lightning&#39;s&quot; breakthrough is a precise and extensive domestic supply-chain network — from driver chips and 3D vision to lidar, liquid cooling systems, and precision components — covering perception, decision-making, and execution across the entire chain.</p> <p>The seven key Chinese suppliers powering Honor&#39;s robot include:</p> <p>GigaDevice (兆易创新) — providing NOR Flash and MCU chips as the robot’s core computing and storage components.</p> <p>Lingyi Itech (领益智造) — supplying precision structural components and housings that protect internal electronics while enabling thermal management.</p> <p>Lens Technology (蓝思科技) — providing optical components essential for the robot’s visual perception systems.</p> <p>AAC Technologies (瑞声科技) — delivering high-precision sensors and acoustic components enabling environmental awareness.</p> <p>In addition, several core suppliers, including Chinese companies specializing in motor drives, power management, and precision sensors, together form the robot&#39;s complete set of key subsystems.</p> <p>The Honor robot&#39;s marathon achievement demonstrates that China&#39;s robotics supply chain has reached a maturity level capable of supporting complex, real-world tasks — not just in controlled lab environments but under the pressure of actual competition.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_15_151120_8931f8921a.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[I Tried Baidu's DuMate AI Agent: Once You Go Multi-Task Parallel, There's No Going Back]]></title>
            <link>https://pandaily.com/baidu-dumate-ai-agent-hands-on-multitask-parallel</link>
            <guid isPermaLink="false">https://pandaily.com/baidu-dumate-ai-agent-hands-on-multitask-parallel</guid>
            <pubDate>Fri, 15 May 2026 07:04:47 GMT</pubDate>
            <description><![CDATA[At Create 2026, Baidu launched DuMate — a general-purpose AI agent that can execute three completely different tasks simultaneously from a single voice command, redefining what AI productivity means.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_15_150339_64faeeb3cc.png" alt="I Tried Baidu's DuMate AI Agent: Once You Go Multi-Task Parallel, There's No Going Back" style="max-width: 100%; height: auto;" /><br/><br/><p>From May 13 to 14, the Create 2026 Baidu AI Developer Conference was held. Baidu founder Robin Li proposed, for the first time, a new &quot;yardstick&quot; for the AI era, Daily Active Agents (DAA), emphasizing that the prosperity of the agent ecosystem should be measured by how many agents are actually working for humans.</p> <p>This draws attention to one product released at the conference: DuMate, a general-purpose AI agent companion. It brings multi-task parallel execution: a single sentence from the user can launch three completely different tasks that run simultaneously.</p> <p>DuMate has also launched a mobile app with remote real-time connectivity to the PC client. In the conference demonstration, the user simply entered an informal command, and DuMate instantly turned into a highly coordinated invisible team, simultaneously launching customer service, operations, and marketing — three completely different jobs running at once.</p> <p>This capability matters. Over the past two years, AI has gradually become part of many people&#39;s daily work routines. But everyone has also noticed that more AI tools don&#39;t necessarily mean more ease.</p> <p>Consider how many AI tools are installed on your phone and computer: one for writing, another for drawing, yet another for coding — the switching cost between tools is becoming a new form of friction. DuMate&#39;s multi-intent parallel execution represents a fundamentally different paradigm: a single command triggers a team of specialized AI agents working simultaneously, with results returned directly.</p> <p>Baidu is attempting to define the standard for AI agent productivity through DuMate, and the results are genuinely impressive.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_15_150339_64faeeb3cc.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[ACL 2026: Alibaba DAMO Academy's I2B-LPO Breaks RLVR Homogenization — From Repetitive Sampling to Effective Exploration]]></title>
            <link>https://pandaily.com/acl-2026-alibaba-damo-i2b-lpo-rlvr-reasoning-diversity</link>
            <guid isPermaLink="false">https://pandaily.com/acl-2026-alibaba-damo-i2b-lpo-rlvr-reasoning-diversity</guid>
            <pubDate>Fri, 15 May 2026 07:01:30 GMT</pubDate>
            <description><![CDATA[Alibaba DAMO Academy's I2B-LPO framework, accepted at ACL 2026 Main, improves math reasoning accuracy by up to 5.3% and semantic diversity by 7.4% by guiding models to generate more diverse reasoning trajectories.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_15_150024_cbf82f0041.png" alt="ACL 2026: Alibaba DAMO Academy's I2B-LPO Breaks RLVR Homogenization — From Repetitive Sampling to Effective Exploration" style="max-width: 100%; height: auto;" /><br/><br/><p>I2B-LPO is an exploration-enhancement framework for RLVR post-training. By improving rollout strategies, it guides models to generate more diverse reasoning trajectories, advancing exploration behavior from &quot;repetitive sampling&quot; to &quot;generating more discriminative reasoning trajectories at key nodes.&quot; On multiple math benchmarks, it simultaneously improves accuracy and semantic diversity — by up to 5.3% and 7.4% respectively.</p> <p>This work, from Alibaba DAMO Academy&#39;s Intelligent Decision team, was accepted to the ACL 2026 main conference.</p> <p>In recent years, with the emergence of reasoning models like DeepSeek-R1, reinforcement learning with verifiable rewards (RLVR) has become an important training paradigm for improving math and coding capabilities. Its core idea: sample multiple reasoning paths for the same problem, then, according to reward signals, reinforce correct paths and suppress incorrect ones.</p> <p>An intuitive question follows: if enough rollout trajectories are sampled, can the model always explore more solutions and obtain more effective update signals? In actual training, however, blindly increasing the sampling count does not necessarily bring improvement.</p> <p>I2B-LPO addresses this by introducing a novel exploration strategy that guides the model toward more discriminative reasoning at decision points, rather than simply generating more of the same types of trajectories. The result is a model that not only performs better but also thinks more diversely.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_15_150024_cbf82f0041.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[SenseTime Releases SenseNova U1: Native Unified Architecture Marks End of 'Stitching' Era]]></title>
            <link>https://pandaily.com/sentime-sensenova-u1-native-unified-architecture</link>
            <guid isPermaLink="false">https://pandaily.com/sentime-sensenova-u1-native-unified-architecture</guid>
            <pubDate>Fri, 15 May 2026 06:58:47 GMT</pubDate>
            <description><![CDATA[SenseTime's open-source SenseNova U1 model represents a paradigm shift in multimodal architecture — unifying understanding and generation in a single representation space for the first time.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/028c212d2c354ac18cbb4aaacc5396a8_fd55fb3766.webp" alt="SenseTime Releases SenseNova U1: Native Unified Architecture Marks End of 'Stitching' Era" style="max-width: 100%; height: auto;" /><br/><br/><p>While the AI industry&#39;s attention is focused on agents, tool-calling, and long-horizon tasks at the application layer, the underlying multimodal architecture is undergoing a quieter and more thorough paradigm shift — one that answers a seemingly simple question: should understanding and generation naturally be the same thing?</p> <p>For a long time, multimodal systems were largely assembled: understanding and generation each bore part of the capability, stitched together at the interfaces. The problems were obvious: understanding relies on pre-trained visual encoders (VE), while generation depends on variational autoencoders (VAE). The two systems have different learning objectives and distinct representation spaces. Information constantly shuttles between modules, inevitably suffering loss and distortion.</p> <p>This is not merely engineering awkwardness — it is a structural limitation that prevents truly native multimodal intelligence from forming.</p> <p>A recent wave of work signals an entirely new approach: abandoning the attempt to &quot;assemble a better system&quot; and instead working from the ground up, placing images, text, video, and even motion into the same representation space for learning and alignment. SenseTime&#39;s open-source next-generation model &quot;Rìrìxīn SenseNova U1&quot; is a concentrated effort in this direction.</p> <p><img src="https://cms-image.pandaily.com/thumbnail_sentime_39f893325b_49f5111678.jpg" alt="thumbnail_sentime_39f893325b.jpg"></p> <p>Last month, SenseTime open-sourced SenseNova U1, a new-generation multimodal large model. Its core innovation lies in the Native Unified Architecture: understanding and generation share the same visual encoder and tokenizer, doing away with the traditional &quot;stitched&quot; architecture.</p> <p>By unifying the representation space, SenseNova U1 achieves significant improvements across multiple benchmarks. The model demonstrates stronger visual reasoning capabilities and more natural multimodal generation compared to stitched architectures.</p> <p>This shift from &quot;stitching systems together&quot; to &quot;unified native architecture&quot; represents a fundamental rethinking of multimodal AI development. As more researchers pursue similar approaches, the era of &quot;patchwork&quot; multimodal systems may be coming to an end.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/028c212d2c354ac18cbb4aaacc5396a8_fd55fb3766.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[DJI Unveils ROMO 2 Robot Vacuum: Incremental Upgrades, but Early Adopters May Have the Last Laugh]]></title>
            <link>https://pandaily.com/dji-unveils-romo-2-robot-vacuum</link>
            <guid isPermaLink="false">https://pandaily.com/dji-unveils-romo-2-robot-vacuum</guid>
            <pubDate>Thu, 14 May 2026 08:20:12 GMT</pubDate>
            <description><![CDATA[DJI launches second-generation robot vacuum with incremental upgrades over its predecessor, maintaining mid-to-high market positioning.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1e078f15034ce466b0311f08d07a7727_5d9fdd8a23.jpg" alt="DJI Unveils ROMO 2 Robot Vacuum: Incremental Upgrades, but Early Adopters May Have the Last Laugh" style="max-width: 100%; height: auto;" /><br/><br/><p>DJI has launched its second-generation robot vacuum, the DJI ROMO 2 series. While many may still associate DJI with its highly publicized &quot;transparent explorer edition,&quot; the truth is that the entire ROMO 1 series performed remarkably well in the market, ranking among the top players in the robot vacuum industry.</p> <p>According to data from RUNTO (洛图科技), DJI&#39;s robot vacuum sales are among the best in the industry. More importantly, DJI has not been competing on low prices — its products are positioned in the mid-to-high end of the market.</p> <p>The ROMO 2 brings incremental improvements over its predecessor. While the upgrades may not be dramatic, DJI continues to refine its cleaning technology, navigation system, and overall user experience.</p> <p>For first-generation ROMO users, the arrival of the ROMO 2 may be a double-edged sword: while it validates DJI&#39;s commitment to the robot vacuum category, early adopters who paid a premium for the original ROMO may feel their purchase has been superseded sooner than expected.</p> <p><img src="https://cms-image.pandaily.com/d75e3b45c6805ace9bc5550e8b412450_f36ce1e8af.jpg" alt="d75e3b45c6805ace9bc5550e8b412450.jpg"></p> <p>DJI, the world&#39;s leading drone manufacturer, has been steadily expanding its consumer electronics portfolio beyond aerial platforms, applying its expertise in precision robotics and computer vision to the home cleaning category.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/1e078f15034ce466b0311f08d07a7727_5d9fdd8a23.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Embodied AI Startup Zenbot Raises Near-100 Million Yuan in Angel Round]]></title>
            <link>https://pandaily.com/zenbot-raises-near-100-million-yuan-angel-round</link>
            <guid isPermaLink="false">https://pandaily.com/zenbot-raises-near-100-million-yuan-angel-round</guid>
            <pubDate>Thu, 14 May 2026 08:16:57 GMT</pubDate>
            <description><![CDATA[Embodied AI startup Zenbot raises near-100M yuan angel round backed by ChangYing Precision, Kedali, and other precision manufacturing leaders.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/250e1ed731581c3a836c5a24b34fb112_e391ff464f.jpg" alt="Embodied AI Startup Zenbot Raises Near-100 Million Yuan in Angel Round" style="max-width: 100%; height: auto;" /><br/><br/><p>Zenbot, an embodied artificial intelligence infrastructure company, has completed a near-100 million yuan (approximately $14 million) angel funding round. The round was co-invested by ChangYing Precision (300115.SZ), Kedali (002850.SZ), Zhaoming Technology (301000.SZ) — all leading precision manufacturing players in their respective niches — along with the L2F Light Source entrepreneur fund and Sirius Capital. Light Source Capital served as the exclusive financial advisor.</p> <p>The funding will primarily be used to develop a general-purpose embodied AI world model, mass-produce core joint modules powered by third-generation semiconductor (GaN) drivers, promote and deliver its brain-spine integrated real-time communication architecture solutions, and strengthen full-stack system design capabilities for mass production of complete products.</p> <p>Zenbot&#39;s co-founder Dr. Jia Zhenzhong holds bachelor&#39;s and master&#39;s degrees from Tsinghua University&#39;s Department of Precision Instruments and a Ph.D. from the University of Michigan. The company&#39;s focus on embodied AI positions it at the intersection of robotics, computer vision, and large-scale AI model development.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/250e1ed731581c3a836c5a24b34fb112_e391ff464f.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Founder Motor Achieves Small-Batch Production of Robot Joint Motors]]></title>
            <link>https://pandaily.com/founder-motor-achieves-small-batch-robot-joint-motor-production</link>
            <guid isPermaLink="false">https://pandaily.com/founder-motor-achieves-small-batch-robot-joint-motor-production</guid>
            <pubDate>Thu, 14 May 2026 08:15:06 GMT</pubDate>
            <description><![CDATA[Founder Motor (002196.SZ) achieves small-batch production of robot joint motors as it expands capacity to meet growing demand.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/d9eddb0e0cd31e7139ba5c5546770910_0bc3475410.jpg" alt="Founder Motor Achieves Small-Batch Production of Robot Joint Motors" style="max-width: 100%; height: auto;" /><br/><br/><p>Founder Motor (002196.SZ) released its investor relations activity record on May 14. Regarding its humanoid robotics business, the company stated that its robot joint motors have reached small-batch production for some products, with additional products in close collaboration with customers for development and prototype manufacturing. Mass production timelines will depend on customer demand.</p> <p>On production capacity, the company has approximately 1 million units/year of drive motor capacity at its Lishui base, with the Deqing base planned for 3 million units/year; installed capacity currently stands at about 1.5 million units/year. Existing capacity is aligned with customer demand and order growth.</p> <p>Regarding the intelligent controller business, the company noted that, owing to competition in the downstream market, it has seen revenue growth without corresponding profit growth over the past two years. It is actively restructuring its product and customer mix, with new projects and customer acquisition underway; individual customer projects have already entered small-batch production.</p> <p>Founder Motor, listed on the Shenzhen Stock Exchange (002196.SZ), specializes in motor manufacturing for robotics, automotive, and industrial applications.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/d9eddb0e0cd31e7139ba5c5546770910_0bc3475410.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[KOKONI Unveils VGGT Series: Breakthroughs in 3D Perception for Dynamic High-Fidelity Reconstruction]]></title>
            <link>https://pandaily.com/kokoni-unveils-vggt-series-breakthroughs-in-3-d-perception-for-dynamic-high-fidelity-reconstruction</link>
            <guid isPermaLink="false">https://pandaily.com/kokoni-unveils-vggt-series-breakthroughs-in-3-d-perception-for-dynamic-high-fidelity-reconstruction</guid>
            <pubDate>Thu, 14 May 2026 08:13:14 GMT</pubDate>
            <description><![CDATA[KOKONI and Tongji University researchers unveil VGGT series breakthroughs enabling dynamic high-fidelity 3D reconstruction for world models.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/OIP_C_1_32857a2639.webp" alt="KOKONI Unveils VGGT Series: Breakthroughs in 3D Perception for Dynamic High-Fidelity Reconstruction" style="max-width: 100%; height: auto;" /><br/><br/><p>Machine Intelligence reports: In the pursuit of Artificial General Intelligence (AGI), world models are seen as key to enabling machines to understand physical laws and achieve spatial intelligence. Efficient, robust, and precise 3D perception is widely regarded as the primary prerequisite for world models.</p> <p>Generally, a mature world model needs three core capabilities: continuous memory over long spatio-temporal sequences, causal decoupling of complex dynamics, and fine-grained perception of high-definition physical details.</p> <p>Recently, KOKONI (魔芯科技), together with multiple research teams including Professor Zhu Lanyun&#39;s team at Tongji University, released four consecutive breakthroughs built on the Visual Geometry Grounded Transformer (VGGT) architecture. This series of work systematically addresses bottlenecks in 3D perception for streaming processing, dynamic robustness, and fine-grained perception, achieving a leap from basic image reconstruction to high-fidelity 4D world models.</p> <p>Three core constraints in 3D perception (long sequences, strong dynamics, and high precision) represent systematic bottlenecks in real industrial scenarios. As input resolution increases, scenes introduce dynamic changes, and data formats expand to video streams, traditional architectures face significant challenges in compute, algorithms, and system design.</p> <p>KOKONI&#39;s VGGT series results demonstrate how visual geometry transformers can overcome these challenges, enabling real-time dynamic reconstruction with unprecedented fidelity.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/OIP_C_1_32857a2639.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Zephyr Intelligence Files for Hong Kong IPO, Backed by New Energy Battery Thermal Safety Solutions]]></title>
            <link>https://pandaily.com/zephyr-intelligence-files-hong-kong-ipo-battery-thermal-safety</link>
            <guid isPermaLink="false">https://pandaily.com/zephyr-intelligence-files-hong-kong-ipo-battery-thermal-safety</guid>
            <pubDate>Thu, 14 May 2026 08:06:41 GMT</pubDate>
            <description><![CDATA[Zephyr Intelligence, a new energy battery thermal safety specialist with 35.3% market share in heavy truck thermal management, files for Hong Kong IPO.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/8e67734fa5ac22b27bd71b48d656dc9a_75d6dd3e3b.jpg" alt="Zephyr Intelligence Files for Hong Kong IPO, Backed by New Energy Battery Thermal Safety Solutions" style="max-width: 100%; height: auto;" /><br/><br/><p>According to a Sing Tao report filed from Beijing by Qi Xin, Zephyr Intelligence Systems (Shanghai) Co., Ltd. (哲弗智能系统) submitted a listing application to the Hong Kong Stock Exchange on May 4, with South China Financing as its sole sponsor.</p> <p>Founded in 2015, Zephyr Intelligence specializes in thermal safety solutions for new energy vehicle batteries. According to a Frost &amp; Sullivan report, the company held approximately 35.3% of the L1 (BITS) thermal management system market for new energy heavy trucks by 2025 revenue, ranking among the market leaders.</p> <p>The company&#39;s thermal safety technology addresses critical challenges in battery systems for electric vehicles and new energy commercial vehicles, where thermal runaway is a significant safety concern. Zephyr Intelligence&#39;s Battery Intelligent Thermal Safety (BITS) system has been deployed across multiple new energy vehicle categories.</p> <p>The IPO filing marks a significant milestone for the company as it seeks to expand manufacturing capacity and accelerate penetration of the rapidly growing new energy vehicle thermal management sector.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/8e67734fa5ac22b27bd71b48d656dc9a_75d6dd3e3b.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Shanghai JetLab Power Technology: Plasma Thrusters Bring Nuclear Fusion Tech to Space]]></title>
            <link>https://pandaily.com/shanghai-jet-lab-power-technology-plasma-thrusters-bring-nuclear-fusion-tech-to-space</link>
            <guid isPermaLink="false">https://pandaily.com/shanghai-jet-lab-power-technology-plasma-thrusters-bring-nuclear-fusion-tech-to-space</guid>
            <pubDate>Thu, 14 May 2026 04:11:32 GMT</pubDate>
            <description><![CDATA[Shanghai JetLab Power Technology is adapting plasma thruster technology originally developed for nuclear fusion research to enable 'space refueling' for satellites in ultra-low Earth orbit — turning decades of fusion R&D into a commercial space propulsion breakthrough.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/20260514120622_1414_13_e66bc7a201.jpg" alt="Shanghai JetLab Power Technology: Plasma Thrusters Bring Nuclear Fusion Tech to Space" style="max-width: 100%; height: auto;" /><br/><br/><p>While the dream of fusion-powered electricity generation remains years away, the technologies developed along that path are already finding practical applications. Shanghai JetLab Power Technology Co., Ltd., a startup based in Shanghai&#39;s Lingang district, is applying plasma thruster capabilities — originally used to confine and accelerate charged particles in fusion devices — to satellite propulsion in ultra-low Earth orbit (200–300 km altitude).</p> <p>In fusion reactors, scientists must generate, heat, confine, and diagnose high-velocity charged particles. JetLab Power is redirecting these already-proven capabilities toward space propulsion, developing plasma thrusters that could provide &quot;space refueling&quot; for satellites by ionizing ambient particles and ejecting them to generate thrust — eliminating the need for satellites to carry all their own propellant. Notably, although the company was incubated by Dongsheng Fusion (Shanghai) Technology Co., JetLab Power was founded in April 2025, predating Dongsheng Fusion&#39;s own founding in July 2025. &quot;For fusion energy, this technology route is not unfamiliar,&quot; said Yang Yang, chief researcher at Dongsheng Fusion. &quot;From the start, we established the &#39;laying eggs along the way&#39; strategy.&quot;</p> <p>The broader &quot;fusion spillover&quot; ecosystem is already maturing across the Yangtze River Delta region. Technologies developed for the EAST (Experimental Advanced Superconducting Tokamak) in Hefei have given rise to contact-free security scanning systems deployed in metro stations, airports, customs, and tobacco factories. A domestically manufactured superconducting proton therapy system has passed national registration testing with over 95% localization — set to break foreign monopolies in cancer treatment equipment.</p> <p>JetLab Power&#39;s plasma thruster remains at an earlier stage of development, but the company is positioning itself at the frontier of a new commercial space infrastructure category: propellant-less satellite operations enabled by in-orbit resource harvesting.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>Industry</category>
            <enclosure url="https://cms-image.pandaily.com/20260514120622_1414_13_e66bc7a201.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[SenseTime Launches Embodied AI Convenience Store with Humanoid Robot Clerks in Shanghai]]></title>
            <link>https://pandaily.com/sensetime-embodied-ai-convenience-store-humanoid-robot-shanghai</link>
            <guid isPermaLink="false">https://pandaily.com/sensetime-embodied-ai-convenience-store-humanoid-robot-shanghai</guid>
            <pubDate>Thu, 14 May 2026 04:05:07 GMT</pubDate>
            <description><![CDATA[SenseTime has deployed a chain of embodied AI convenience stores in Shanghai, where humanoid robots handle up to 400 orders per day — making coffee, dispensing ice cream, and greeting customers autonomously, marking a significant step toward full retail task automation.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_14_120201_d8778b41a1.png" alt="SenseTime Launches Embodied AI Convenience Store with Humanoid Robot Clerks in Shanghai" style="max-width: 100%; height: auto;" /><br/><br/><p>A new type of convenience store staffed entirely by humanoid robots has opened in Shanghai, marking one of the most advanced commercial deployments of embodied AI in China&#39;s retail sector.</p> <p>Customers scan a QR code to place orders from their phones. The robot clerk then autonomously receives the order, locates and retrieves the corresponding product, and hands it directly to the customer — drawing curious crowds of passersby. Beyond order fulfillment, the robots can handle product selection, pricing, and inventory-replenishment data analysis, extending embodied intelligence across the full range of retail tasks.</p> <p>During this year&#39;s May Day holiday, a small store at the Baoshan Riverside Scenic Area in Shanghai drew queues of customers. Scanning, ordering, and delivery — all handled by the robot clerk.</p> <p>The store, branded &quot;Shaomai Gou&quot; (烧卖购), is SenseTime&#39;s latest physical deployment of embodied AI in retail. SenseTime, one of China&#39;s leading AI companies, has been rapidly expanding its embodied intelligence portfolio, which combines robotic vision, natural language understanding, and physical manipulation capabilities.</p> <p><img src="https://cms-image.pandaily.com/2026_05_14_120333_8f1c7bdb02.png" alt="Screenshot 2026-05-14 120333.png"></p> <p>This deployment signals a concrete commercial pathway for embodied AI beyond factory floors — bringing robot workers into high-frequency, consumer-facing service roles across China&#39;s sprawling urban retail landscape.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>News</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_14_120201_d8778b41a1.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[China's 'Jiuzhang 4' Quantum Computer Achieves 10^54 Speedup Over Supercomputers]]></title>
            <link>https://pandaily.com/china-jiuzhang-4-quantum-computer-achieves-10-power-54-speedup</link>
            <guid isPermaLink="false">https://pandaily.com/china-jiuzhang-4-quantum-computer-achieves-10-power-54-speedup</guid>
            <pubDate>Thu, 14 May 2026 03:59:54 GMT</pubDate>
            <description><![CDATA[Chinese researchers from the University of Science and Technology of China have built 'Jiuzhang 4,' a programmable photonic quantum computing prototype with 8,176 modes and the ability to manipulate 3,050 photons — achieving a quantum advantage 10^54 times faster than the world's fastest supercomputer.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/1/img_jiuzhang_e9036a2c29.jpg" alt="China's 'Jiuzhang 4' Quantum Computer Achieves 10^54 Speedup Over Supercomputers" style="max-width: 100%; height: auto;" /><br/><br/><p>Researchers from the University of Science and Technology of China (USTC), led by Pan Jianwei, Lu Chaoyang, Zhang Qiang, and Liu Naile, have successfully developed &quot;Jiuzhang 4&quot; — a programmable photonic quantum computing prototype with 1,024 squeezed vacuum inputs across 8,176 optical modes, capable of manipulating and detecting up to 3,050 photons simultaneously.</p> <p>Announced on May 13, the breakthrough was published in the international academic journal Nature. Jiuzhang 4 is designed to efficiently solve Gaussian boson sampling (GBS) tasks, achieving a computational speedup of 10^54 over the world&#39;s fastest supercomputer, El Capitan — establishing what the team calls &quot;the world&#39;s strongest quantum computational advantage.&quot;</p> <p>To address the photon-loss problem that has long constrained scalable photonic quantum processors, the research team developed highly efficient optical parametric oscillator light sources and a spatiotemporal mixed-encoding interferometer. By integrating 1,024 high-efficiency squeezed states into an 8,176-mode circuit with spatiotemporal mixed encoding, the system achieved 92% source efficiency and 51% total system efficiency, enabling sampling within a Hilbert space of dimension 10^2461.</p> <p>Benchmark tests show that Jiuzhang 4 generates a single sample in just 25 microseconds, while El Capitan, running the best known classical algorithms, would require more than 10^42 years to produce the same output — a quantum advantage ratio of approximately 10^54.</p> <p>The Jiuzhang series of specialized quantum computing prototypes has been central to demonstrating quantum computational advantage. The Jiuzhang 4 result represents a major leap in the scale and complexity of low-loss photonic quantum processors, further consolidating China&#39;s leading position in optical quantum computing.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <enclosure url="https://cms-image.pandaily.com/1/img_jiuzhang_e9036a2c29.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Baidu Cloud Upgrades to Full-Stack AI Cloud for Agent Era at Create 2026 Conference]]></title>
            <link>https://pandaily.com/baidu-cloud-full-stack-ai-agent-create-2026</link>
            <guid isPermaLink="false">https://pandaily.com/baidu-cloud-full-stack-ai-agent-create-2026</guid>
            <pubDate>Thu, 14 May 2026 03:59:27 GMT</pubDate>
            <description><![CDATA[At Create 2026, Baidu announced a comprehensive upgrade of Baidu Cloud to an agent-centric full-stack AI cloud, featuring the Token Factory for agent-first inference, KV Cache hit rates exceeding 90%, and the Kunlun P800 completing multiple 10,000-GPU clusters at 97% training efficiency.]]></description>
            <content:encoded><![CDATA[<img src="https://cms-image.pandaily.com/2026_05_14_115849_29e0ebfd0b.png" alt="Baidu Cloud Upgrades to Full-Stack AI Cloud for Agent Era at Create 2026 Conference" style="max-width: 100%; height: auto;" /><br/><br/><p>At the Create 2026 Baidu AI Developer Conference on May 13, Baidu&#39;s Executive Vice President Shen Dou announced a comprehensive upgrade of Baidu Cloud to a full-stack AI cloud platform purpose-built for large-scale agent applications, targeting the best intelligence per token and the best performance per watt in AI infrastructure.</p> <p>On the Agent Infra side, the original &quot;MaaS Model Service&quot; has been upgraded to &quot;Token Factory,&quot; rebuilt around an Agent-first product architecture that minimizes token recalculation and achieves approximately 25% faster inference generation than market benchmarks. It supports major domestic models including Ernie, DeepSeek, GLM, and MiniMax.</p> <p>Baidu Cloud also launched &quot;Harness Engineering,&quot; covering long-context management, persistent memory, tool calling, sub-agent scheduling, and Runtime capabilities. In browser and Office task scenarios, task success rates reach 95%, with 23% lower token consumption than OpenAI&#39;s offerings.</p> <p>On the AI Infra side, a layered pooling architecture spanning GPU memory, DRAM, and SSD pushes KV Cache hit rates above 90%. Long-chain agent inference performance is 3x that of mainstream open-source engines, and a unified multimodal training framework delivers 2x the training efficiency of community standards.</p> <p>On the hardware side, the Kunlun P800 has completed scale validation, delivering multiple 10,000-GPU clusters with a 97% effective training rate and 85%+ linear scaling. The Kunlun-based Tianchi 256-card supernode launches in June with 25% higher throughput, having completed adaptation of the Ernie 5.1, DeepSeek, GLM, and MiniMax models with a 50% inference efficiency improvement.</p> <p>Baidu also unveiled GW-scale AIDC upgrades, cutting data center construction cycles by approximately 30% through a hybrid air-liquid cooling architecture.</p>]]></content:encoded>
            <author>contact@pandaily.com (Pandaily)</author>
            <category>AI</category>
            <category>Industry</category>
            <enclosure url="https://cms-image.pandaily.com/2026_05_14_115849_29e0ebfd0b.png" length="0" type="image/png"/>
        </item>
    </channel>
</rss>