PulseAugur
research

Relit-LiVE framework enhances video relighting without camera pose knowledge

Researchers have introduced Relit-LiVE, a new framework for video relighting that aims to produce physically consistent and temporally stable results without requiring prior camera pose information. The method incorporates raw reference images into the rendering process to recover scene cues lost during intrinsic decomposition, and jointly predicts relit videos with aligned environment maps. This approach improves geometric-illumination alignment, supports dynamic lighting and camera motion, and outperforms existing video relighting and neural rendering techniques.
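To ground the "forward rendering under novel illumination" idea mentioned in the paper's abstract: classical relighting pipelines decompose a frame into intrinsic components (e.g., albedo and surface normals) and then re-shade them against a target environment map. The sketch below is NOT Relit-LiVE's method, just a minimal Lambertian (diffuse-only) illustration of that rendering step; all function and variable names are hypothetical.

```python
import numpy as np

def relight_diffuse(albedo, normals, env_dirs, env_radiance):
    """Re-shade a frame under a new environment, diffuse-only.

    albedo:       (H, W, 3) per-pixel base color from intrinsic decomposition
    normals:      (H, W, 3) unit surface normals
    env_dirs:     (K, 3) unit light directions sampled from an environment map
    env_radiance: (K, 3) RGB radiance arriving from each sampled direction
    """
    # Lambertian cosine term, clamped so back-facing light contributes nothing
    cos = np.clip(normals @ env_dirs.T, 0.0, None)        # (H, W, K)
    # Monte Carlo-style average over sampled directions -> per-pixel irradiance
    irradiance = (cos @ env_radiance) / len(env_dirs)      # (H, W, 3)
    return albedo * irradiance

# Usage: a flat gray surface facing a single overhead light keeps its albedo
albedo = np.full((2, 2, 3), 0.5)
normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0
env_dirs = np.array([[0.0, 0.0, 1.0]])
env_radiance = np.array([[1.0, 1.0, 1.0]])
relit = relight_diffuse(albedo, normals, env_dirs, env_radiance)
```

The hard parts the paper targets (temporal stability across frames, recovering cues lost in decomposition, and keeping the predicted environment map geometrically aligned with the relit video) are exactly what this per-frame, per-pixel sketch ignores.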

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT This framework could enable more realistic and dynamic video editing and rendering applications by improving the consistency and stability of relighting.

RANK_REASON The cluster contains a research paper detailing a new framework for video relighting.

Read on arXiv cs.CV →

COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Weiqing Xiao, Hong Li, Xiuyu Yang, Houyuan Chen, Wenyi Li, Tianqi Liu, Shaocong Xu, Chongjie Ye, Hao Zhao, Beibei Wang ·

    Relit-LiVE: Relight Video by Jointly Learning Environment Video

    arXiv:2605.06658v1 Announce Type: new Abstract: Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination…

  2. arXiv cs.CV TIER_1 · Beibei Wang ·

    Relit-LiVE: Relight Video by Jointly Learning Environment Video

    Recent advances have shown that large-scale video diffusion models can be repurposed as neural renderers by first decomposing videos into intrinsic scene representations and then performing forward rendering under novel illumination. While promising, this paradigm fundamentally r…