PulseAugur
research · [2 sources]

LLMs adapt to neurodivergence context with structured output changes

A new research framework called NDBench has been developed to measure how frontier large language models adapt to neurodivergence (ND) contexts within system prompts. The study found that LLMs significantly alter their outputs, producing lengthier and more structured responses when explicitly instructed to do so. However, simply asserting an ND persona without explicit instructions was insufficient to mitigate potentially harmful tendencies in the models' outputs.

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Provides a framework for auditing LLM adaptation to neurodivergence, potentially influencing future model development and safety evaluations.

RANK_REASON Academic paper introducing a new benchmark and measurement framework for LLM adaptation.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Ishan Gupta, Pavlo Buryi

    How Frontier LLMs Adapt to Neurodivergence Context: A Measurement Framework for Surface vs. Structural Change in System-Prompted Responses

    arXiv:2605.00113v1 Announce Type: new Abstract: We examine if frontier chat-based large language models (LLMs) adjust their outputs based on neurodivergence (ND) context in system prompts and describe the nature of these adjustments. Specifically, we propose NDBench, a 576-output…

  2. arXiv cs.CL TIER_1 · Pavlo Buryi

    How Frontier LLMs Adapt to Neurodivergence Context: A Measurement Framework for Surface vs. Structural Change in System-Prompted Responses

    We examine if frontier chat-based large language models (LLMs) adjust their outputs based on neurodivergence (ND) context in system prompts and describe the nature of these adjustments. Specifically, we propose NDBench, a 576-output benchmark involving two frontier models, three …