
University of Hong Kong and Kuaishou Kling Introduce MemFlow to Solve Long-Video Memory Issues in AI Generation
Researchers have introduced MemFlow, a dynamic memory mechanism for AI video generation that tackles narrative-coherence issues in long videos, achieving a 96.60 subject-consistency score with real-time inference.
Researchers from the University of Hong Kong and Kuaishou’s Kling team have jointly proposed MemFlow, a novel approach designed to address the long-standing challenges of memory decay and narrative inconsistency in AI-generated long videos.
MemFlow introduces a dynamic, adaptive streaming long-term memory mechanism that significantly improves narrative coherence and visual consistency across extended video sequences. Traditional methods often rely on rigid memory strategies, resulting in identity drift or character confusion over time.
The solution features two core components: Narrative-Adaptive Memory (NAM), which retrieves the most relevant historical visual context based on the current prompt, and Sparse Memory Activation (SMA), which selectively activates key information to maintain computational efficiency.

In benchmark tests, MemFlow achieved a VBench-Long overall quality score of 85.02 and an aesthetic score of 61.07, while maintaining stable long-range semantic consistency. Subject consistency reached 96.60, and inference ran in real time at 18.7 FPS on a single NVIDIA H100 GPU, highlighting gains in both quality and efficiency.
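The article does not detail MemFlow's actual algorithms, but a minimal sketch can convey the two ideas: a NAM-style step that scores cached history features against the current prompt embedding, and an SMA-style step that gates out weakly relevant entries. All function names, shapes, and thresholds below are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def retrieve_narrative_memory(prompt_emb, memory_bank, top_k=4):
    """NAM-style retrieval (sketch): score each cached history embedding
    against the current prompt and keep the top-k most relevant entries.
    `memory_bank` is an (N, D) tensor of historical visual features."""
    scores = F.cosine_similarity(memory_bank, prompt_emb.unsqueeze(0), dim=-1)
    top = scores.topk(min(top_k, memory_bank.size(0)))
    return memory_bank[top.indices], top.values

def sparse_memory_activation(retrieved, relevance, threshold):
    """SMA-style gating (sketch): activate only memories whose relevance
    clears a threshold, so later attention touches a sparse subset."""
    mask = relevance >= threshold
    return retrieved[mask]

# Toy usage: 32 cached history embeddings of dimension 128.
memory_bank = torch.randn(32, 128)
prompt_emb = torch.randn(128)
retrieved, relevance = retrieve_narrative_memory(prompt_emb, memory_bank)
active = sparse_memory_activation(retrieved, relevance, relevance.median())
print(active.shape)  # only the strongly relevant memories remain active
```

The design intuition this sketch mirrors is that retrieval keeps the memory narrative-relevant while sparse activation keeps the per-step compute bounded, which is what allows streaming generation to stay real-time.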
Source: liangziwei