PulseAugur
research · [2 sources]

PersonaGesture personalizes co-speech gestures for unseen speakers

Researchers have developed PersonaGesture, a diffusion-based system that personalizes co-speech gestures for unseen speakers. Given target speech and a single motion clip from a new individual, the system generates gestures that match the utterance while preserving the speaker's unique style. PersonaGesture uses Adaptive Style Infusion and Implicit Distribution Rectification to separate speaker identity from utterance-specific motion, improving personalization over existing methods.
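The single-reference idea above can be sketched in a toy form: pool one reference motion clip into a style vector, then inject that style into speech-conditioned features at each reverse-diffusion step. The function names (`extract_style`, `adaptive_style_infusion`) and the AdaIN-like modulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_style(reference_motion: np.ndarray) -> np.ndarray:
    """Pool a single reference motion clip (T_ref, D) into a fixed style vector (D,)."""
    return reference_motion.mean(axis=0)

def adaptive_style_infusion(features: np.ndarray, style: np.ndarray) -> np.ndarray:
    """AdaIN-like modulation (an assumption here, not the paper's module):
    normalize speech-driven features, then scale/shift them per channel
    using the speaker's style vector."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-6
    normalized = (features - mu) / sigma
    return normalized * (1.0 + style) + style

def denoise_step(noisy_motion, speech_features, style, t, num_steps):
    """One toy reverse-diffusion step: blend the noisy motion toward a
    style-infused, speech-conditioned estimate of the clean motion."""
    estimate = adaptive_style_infusion(speech_features, style)
    alpha = t / num_steps  # remaining noise fraction at step t
    return alpha * noisy_motion + (1.0 - alpha) * estimate

rng = np.random.default_rng(0)
T, D = 60, 8                                 # frames, joint-feature channels
speech_features = rng.normal(size=(T, D))    # stand-in for an audio encoder's output
reference_motion = rng.normal(size=(30, D))  # the one clip from the unseen speaker

style = extract_style(reference_motion)
motion = rng.normal(size=(T, D))             # start from pure noise
num_steps = 50
for t in range(num_steps, 0, -1):
    motion = denoise_step(motion, speech_features, style, t, num_steps)

print(motion.shape)  # one generated pose per speech frame
```

The key design point the summary describes is the separation of conditioning signals: speech drives *what* motion happens per frame, while the style vector from the single reference clip controls *how* it is performed.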

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Enhances the realism and personalization of virtual avatars and agents by enabling more natural co-speech gesture synthesis.

RANK_REASON This is a research paper detailing a new AI model and methodology.

Read on arXiv cs.CV →

COVERAGE [2]

  1. arXiv cs.CV TIER_1 · Xiangyue Zhang, Yiyi Cai, Kunhang Li, Kaixing Yang, You Zhou, Zhengqing Li, Xuangeng Chu, Jiaxu Zhang, Haiyang Liu

    PersonaGesture: Single-Reference Co-Speech Gesture Personalization for Unseen Speakers

    arXiv:2605.06064v1 Announce Type: new Abstract: We propose PersonaGesture, a diffusion-based pipeline for single-reference co-speech gesture personalization of unseen speakers. Given target speech and one motion clip from a new speaker, the model must synthesize gestures that fol…

  2. arXiv cs.CV TIER_1 · Haiyang Liu

    PersonaGesture: Single-Reference Co-Speech Gesture Personalization for Unseen Speakers

    We propose PersonaGesture, a diffusion-based pipeline for single-reference co-speech gesture personalization of unseen speakers. Given target speech and one motion clip from a new speaker, the model must synthesize gestures that follow the new utterance while retaining speaker-sp…