PulseAugur
OmniHumanoid enables cross-embodiment video generation with unpaired data

Researchers have introduced OmniHumanoid, a new framework for generating videos of humanoids performing actions across different embodiments. The system separates transferable motion learning from embodiment-specific adaptation: it first learns from paired videos spanning multiple embodiments, then adapts to new embodiments from unpaired data via lightweight adapters. OmniHumanoid employs a branch-isolated attention design to prevent interference between motion conditioning and embodiment modulation, and demonstrates strong motion fidelity and embodiment consistency on both synthetic and real-world benchmarks.
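The paper's exact architecture is not detailed in this summary, but the idea of "branch-isolated attention" can be illustrated with a minimal sketch: the video latent queries two separate cross-attention branches, one over motion tokens and one over embodiment tokens, and the two conditioning streams never attend to each other. All function names, the single-head identity-projection attention, and the residual combination below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    # Single-head cross-attention with identity Q/K/V projections for brevity.
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)
    return softmax(scores) @ context

def branch_isolated_block(video_tokens, motion_tokens, embodiment_tokens):
    """Hypothetical branch-isolated attention block.

    Motion conditioning and embodiment modulation run in separate branches:
    video queries attend to motion tokens and embodiment tokens independently,
    so the two conditioning signals cannot interfere through attention.
    """
    motion_out = cross_attention(video_tokens, motion_tokens)          # motion branch
    embodiment_out = cross_attention(video_tokens, embodiment_tokens)  # embodiment branch
    # Residual combination; in the paper's setting, the embodiment branch could
    # be served by lightweight adapters trained on unpaired data while the
    # motion branch stays frozen.
    return video_tokens + motion_out + embodiment_out
```

Because each branch sees only its own conditioning tokens, swapping in a new embodiment adapter leaves the motion pathway's computation untouched, which is one plausible reading of how interference between the two signals is avoided.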

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Enables more scalable data generation for embodied intelligence by facilitating motion transfer across diverse humanoid embodiments.

RANK_REASON Academic paper release on a novel framework for video generation.

Read on arXiv cs.CV →

COVERAGE [1]

  1. arXiv cs.CV · Mike Zheng Shou

    OmniHumanoid: Streaming Cross-Embodiment Video Generation with Paired-Free Adaptation

    Cross-embodiment video generation aims to transfer motions across different humanoid embodiments, such as human-to-robot and robot-to-robot, enabling scalable data generation for embodied intelligence. A major challenge in this setting is that motion dynamics are partly transfera…