PulseAugur

TextPro-SLM reduces speech LLM modality gap by enhancing input processing

Researchers have developed TextPro-SLM, a speech large language model (SLM) designed to minimize the modality gap between spoken and text-based inputs. Unlike prior approaches that target output generation, TextPro-SLM works on the input side, making speech input more text-like so the model behaves as a prosody-aware text LLM. The model couples a unified speech encoder with an LLM backbone and reports state-of-the-art performance on paralinguistic understanding tasks with significantly less training data.
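The summary does not detail the paper's exact architecture. As a generic illustration of the common "speech encoder + LLM backbone" pattern it references, here is a minimal NumPy sketch in which frame-level speech features are linearly projected into the LLM's embedding space and concatenated with text token embeddings; all dimensions, the random features, and the single linear projector are assumptions for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper).
d_speech, d_llm = 256, 1024    # speech-feature dim, LLM embedding dim
T_speech, T_text = 50, 12      # speech frames, text prompt tokens

# A frozen speech encoder would emit frame-level features...
speech_feats = rng.standard_normal((T_speech, d_speech))

# ...which a learned linear projector maps into the LLM embedding space.
W_proj = rng.standard_normal((d_speech, d_llm)) * 0.02
speech_embeds = speech_feats @ W_proj            # shape (T_speech, d_llm)

# Text prompt tokens already live in the LLM embedding space.
text_embeds = rng.standard_normal((T_text, d_llm))

# The LLM backbone then consumes the concatenated sequence.
llm_input = np.concatenate([speech_embeds, text_embeds], axis=0)
print(llm_input.shape)  # (62, 1024)
```

The projection step is where input-side approaches differ: how closely the projected speech embeddings match the distribution of text embeddings largely determines the remaining modality gap.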

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT This approach could yield more accurate and data-efficient speech language models by narrowing the modality gap at the input stage rather than during output generation.

RANK_REASON The cluster contains an arXiv preprint detailing a new model and methodology.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Wenqian Cui, Xiao-Hui Li, Daxin Tan, Qiyong Zheng, Irwin King

    Minimizing Modality Gap from the Input Side: Your Speech LLM Can Be a Prosody-Aware Text LLM

    arXiv:2605.05927v1 · Abstract: Speech large language models (SLMs) are typically built from text large language model (TLM) checkpoints, yet they still suffer from a substantial modality gap. Prior work has mainly attempted to reduce this gap from the output side…

  2. arXiv cs.CL TIER_1 · Irwin King

    Minimizing Modality Gap from the Input Side: Your Speech LLM Can Be a Prosody-Aware Text LLM

    Speech large language models (SLMs) are typically built from text large language model (TLM) checkpoints, yet they still suffer from a substantial modality gap. Prior work has mainly attempted to reduce this gap from the output side by making speech generation more text-like, but…