MoCHA-former: Moiré-Conditioned Hybrid Adaptive Transformer for Video Demoiréing
Jeahun Sung
GSAIM, Chung-Ang University, CM Lab
Recent advances in portable imaging have made screen capture commonplace, but frequency aliasing between the camera's color filter array (CFA) and the display's sub-pixel layout causes severe moiré artifacts. We propose MoCHA-former, a video demoiréing framework built around two key modules: DMAD, which separates moiré from content to produce moiré-adaptive features, and STAD, which captures large-scale structures, models channel dependence, and enforces temporal consistency without explicit alignment. Evaluations on RAW and sRGB video datasets show that MoCHA-former outperforms prior methods in PSNR, SSIM, and LPIPS.
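To make the two-module design concrete, the sketch below shows one way such a pipeline could be wired in PyTorch. The module names DMAD and STAD are reused from the abstract, but their internals here are illustrative placeholders under my own assumptions, not the MoCHA-former implementation.

```python
# Minimal sketch of a two-module video demoiréing pipeline.
# DMAD/STAD internals are illustrative placeholders, not the authors' design.
import torch
import torch.nn as nn


class DMAD(nn.Module):
    """Placeholder: separates moiré cues from content to build moiré-adaptive features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.moire_branch = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.content_branch = nn.Conv3d(3, channels, kernel_size=3, padding=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 3, T, H, W) video clip
        moire_feat = self.moire_branch(frames)
        content_feat = self.content_branch(frames)
        # Condition content features on the estimated moiré cues.
        return content_feat * torch.sigmoid(moire_feat)


class STAD(nn.Module):
    """Placeholder: spatio-temporal aggregation with channel re-weighting, no explicit alignment."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 7, 7), padding=(0, 3, 3))
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Conv3d(channels, channels, 1), nn.Sigmoid()
        )
        self.head = nn.Conv3d(channels, 3, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        feat = self.spatial(feat)               # large-scale spatial structure
        feat = self.temporal(feat)              # implicit temporal fusion across frames
        feat = feat * self.channel_gate(feat)   # channel-dependence modeling
        return self.head(feat)                  # reconstruct clean frames


class TwoStageDemoire(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.dmad = DMAD(channels)
        self.stad = STAD(channels)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.stad(self.dmad(frames))


if __name__ == "__main__":
    clip = torch.randn(1, 3, 5, 64, 64)  # (batch, RGB, frames, height, width)
    out = TwoStageDemoire()(clip)
    print(out.shape)  # torch.Size([1, 3, 5, 64, 64])
```

The point of the sketch is only the data flow implied by the abstract: moiré-adaptive features are produced first, then refined spatio-temporally and per-channel without an explicit frame-alignment step.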
Jeahun Sung is a graduate student at GSAIM.
His recent research focuses on low-level vision, with particular interests in image and video demoiréing as well as medical imaging.