Unconstrained Film Production Using Generative Models

Speaker

Hyeonseo Lee
| Graduate School of Advanced Imaging Science, Multimedia, and Film (GSAIM)

Abstract

This paper presents an approach to film production using generative models, focusing on creating realistic videos. Commercial AI tools commonly produce flawed video, with issues such as unnatural motion and visual artifacts. To address these problems, we integrate Stable Video Diffusion, which maintains visual consistency across frames, with EMA-VFI (Extracting Motion and Appearance for Video Frame Interpolation), which yields smooth motion portrayal. The experimental results demonstrate significant improvements in video quality, emphasizing the potential of AI to transform modern film production.
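The abstract describes a two-stage pipeline: a diffusion model generates a visually consistent clip, and a frame-interpolation model inserts in-between frames to smooth the motion. The sketch below illustrates only the interpolation stage; it is a minimal stand-in, not the authors' implementation. A naive linear blend plays the role of EMA-VFI, which would instead predict the in-between frame with a learned motion-and-appearance network, and the toy gradient clip stands in for Stable Video Diffusion output.

```python
import numpy as np

def interpolate_midframes(frames: np.ndarray) -> np.ndarray:
    """Double the frame rate by inserting one synthetic frame between
    each consecutive pair.

    `frames` has shape (T, H, W, C). Here each in-between frame is a
    simple linear blend of its two neighbours -- a naive stand-in for
    EMA-VFI, which estimates inter-frame motion with a trained network.
    """
    mids = 0.5 * (frames[:-1] + frames[1:])             # blend neighbouring frames
    out = np.empty((2 * len(frames) - 1, *frames.shape[1:]), dtype=frames.dtype)
    out[0::2] = frames                                  # keep the original frames
    out[1::2] = mids.astype(frames.dtype)               # insert the midframes
    return out

# Toy "generated clip": 4 frames of 8x8 RGB, brightness increasing over time
# (standing in for frames produced by Stable Video Diffusion).
clip = np.stack([np.full((8, 8, 3), t, dtype=np.float32) for t in range(4)])
smooth = interpolate_midframes(clip)   # 7 frames: originals plus 3 midframes
```

In the actual pipeline, the interpolator is applied to the diffusion model's output so that motion between generated keyframes appears continuous rather than stuttering.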

Hyeonseo Lee is currently working toward an MS degree at the Graduate School of Advanced Imaging Science, Multimedia, and Film (GSAIM), Chung-Ang University (CAU; Seoul, South Korea), and has been a member of the CPAI Lab (https://cmlab.cau.ac.kr, https://sites.google.com/view/pai-lab) since March 2024. Her research focuses on computer vision and machine learning, with a particular emphasis on re-identification and video analysis, leveraging domain generalization and generative models.
