PulseAugur
LLM decoders show mixed fairness in speech recognition; audio encoder design is key

A new paper investigates how large language model (LLM) decoders affect fairness in speech recognition systems. The researchers found that LLM decoders do not necessarily amplify racial bias, with one model even showing improved ethnicity fairness. However, certain models, such as Whisper, exhibited significant issues with specific accents and under acoustic degradation, sometimes producing pathological hallucinations or repetition loops. The study suggests that audio encoder design, rather than LLM scale, is more critical for achieving equitable and robust speech recognition.

Summary written by gemini-2.5-flash-lite from 1 source.




COVERAGE [1]

  1. arXiv cs.CL · Srinivasan Parthasarathy

    Do LLM Decoders Listen Fairly? Benchmarking How Language Model Priors Shape Bias in Speech Recognition

    As pretrained large language models replace task-specific decoders in speech recognition, a critical question arises: do their text-derived priors make recognition fairer or more biased across demographic groups? We evaluate nine models spanning three architectural generations (C…