Despite recent breakthroughs, generative video models still struggle to portray motion accurately. Many existing models are trained primarily on pixel-level reconstruction objectives, which biases them toward appearance fidelity at the expense of motion coherence.
These flaws surface as implausible physics, missing frames, or distortions in intricate motion sequences. Models may fail to render rotational motion or dynamic behaviors such as gymnastics and object interactions. Addressing these difficulties is critical for increasing the realism of AI-generated videos, especially as their use expands into creative and professional settings.
Meta AI Unveils VideoJAM: Innovative AI Framework to Boost Motion Coherence in Generated Videos
Meta AI introduces VideoJAM, a framework designed to improve motion representation in video generation models. By encouraging a joint appearance-motion representation, VideoJAM increases the coherence of the motion it produces.
Unlike conventional approaches, which treat motion as a secondary element, VideoJAM incorporates it directly into both the training and inference procedures. The framework can be integrated into existing models with minimal changes, offering an effective way to improve motion quality without modifying the training data.
Technical Approach and Benefits
VideoJAM consists of two primary components:
- Training Phase: An input video (x1) and its corresponding motion representation (d1) are both corrupted with noise and embedded into a single joint latent representation by a linear layer (Win+). A diffusion model then processes this joint representation, and two linear projection layers (Wout+) predict the appearance and motion components from it. This structured objective balances appearance fidelity with motion coherence, mitigating the trade-off common in previous models; a minimal training sketch follows this list.
- Inference Phase (Inner-Guidance Mechanism): During inference, VideoJAM introduces Inner-Guidance, where the model uses its own evolving motion predictions to guide video generation. Unlike conventional techniques that rely on fixed external signals, Inner-Guidance lets the model adjust its motion representation dynamically, leading to smoother and more natural transitions between frames; a guidance sketch also appears below.
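To make the training-phase wiring concrete, here is a minimal PyTorch sketch of the two added projections. The class and argument names (JointAppearanceMotionHeads, latent_dim, joint_dim, the noise_fn/loss_fn callables, and the diffusion_model stand-in) are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn as nn

class JointAppearanceMotionHeads(nn.Module):
    """Sketch of VideoJAM's two added linear layers (names/shapes assumed)."""

    def __init__(self, latent_dim: int, joint_dim: int):
        super().__init__()
        # Win+: fuses concatenated appearance and motion latents into one joint latent
        self.w_in = nn.Linear(2 * latent_dim, joint_dim)
        # Wout+: two projection heads recover appearance and motion predictions
        self.w_out_appearance = nn.Linear(joint_dim, latent_dim)
        self.w_out_motion = nn.Linear(joint_dim, latent_dim)

    def embed(self, noisy_video: torch.Tensor, noisy_motion: torch.Tensor) -> torch.Tensor:
        # Concatenate the noised video latent (x1) and motion latent (d1), then fuse them
        return self.w_in(torch.cat([noisy_video, noisy_motion], dim=-1))

    def predict(self, joint_features: torch.Tensor):
        # Split the diffusion model's output back into appearance and motion predictions
        return self.w_out_appearance(joint_features), self.w_out_motion(joint_features)


def training_step(heads, diffusion_model, video_latent, motion_latent, noise_fn, loss_fn):
    """One simplified training step: noise both signals, fuse, denoise, supervise both."""
    noisy_video, noisy_motion = noise_fn(video_latent), noise_fn(motion_latent)
    joint = heads.embed(noisy_video, noisy_motion)
    features = diffusion_model(joint)               # the pre-trained backbone is reused
    pred_video, pred_motion = heads.predict(features)
    # The joint objective balances appearance fidelity against motion coherence
    return loss_fn(pred_video, video_latent) + loss_fn(pred_motion, motion_latent)
```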
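The Inner-Guidance step can be sketched in a similar, hedged way: a classifier-free-guidance-style combination in which the model's own motion prediction supplies the guidance signal. The guidance weight `scale` and the zeroed-motion "unguided" pass are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def inner_guidance_denoise(heads, diffusion_model, noisy_video, noisy_motion, scale=2.0):
    """Illustrative Inner-Guidance step using the JointAppearanceMotionHeads sketch above.

    The model's own motion prediction, produced in the same forward pass,
    steers the appearance prediction; `scale` is an assumed guidance weight.
    """
    # Joint pass: condition on the current (noisy) motion estimate
    feats_joint = diffusion_model(heads.embed(noisy_video, noisy_motion))
    video_with_motion, motion_pred = heads.predict(feats_joint)

    # Motion-free pass: zero out the motion channel to get an unguided estimate
    feats_plain = diffusion_model(heads.embed(noisy_video, torch.zeros_like(noisy_motion)))
    video_plain, _ = heads.predict(feats_plain)

    # CFG-style extrapolation toward the motion-consistent prediction
    guided_video = video_plain + scale * (video_with_motion - video_plain)
    return guided_video, motion_pred
```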
Insights
Evaluations of VideoJAM indicate notable improvements in motion coherence across different types of videos. Key findings include:
- Enhanced Motion Representation: Compared to established models like Sora and Kling, VideoJAM reduces artifacts such as frame distortions and unnatural object deformations.
- Improved Motion Fidelity: VideoJAM consistently achieves higher motion coherence scores in both automated assessments and human evaluations.
- Versatility Across Models: The framework integrates effectively with various pre-trained video models, demonstrating its adaptability without requiring extensive retraining.
- Efficient Implementation: VideoJAM enhances video quality using only two additional linear layers, making it a lightweight and practical solution.
Conclusion
VideoJAM provides a structured approach to improving motion coherence in AI-generated videos by treating motion as a first-class component rather than an afterthought. By leveraging a joint appearance-motion representation and an Inner-Guidance mechanism, the framework enables models to generate videos with greater temporal consistency and realism. With minimal architectural modifications required, VideoJAM offers a practical means to refine motion quality in generative video models, making them more reliable for a range of applications.