Researchers have developed Adaptive Tree-of-Retrieval (Adaptive ToR), a novel architecture for multi-intent natural language understanding that optimizes for both accuracy and computational efficiency. The system dynamically adjusts its retrieval topology based on query complexity, routing simpler queries through a fast single-step path and more complex ones through an adaptive-depth hierarchical decomposition. Evaluations on the NLU++ benchmark demonstrated a 9.7% relative improvement in accuracy over existing methods, while simultaneously reducing latency by 37.6% and LLM invocations by 43.0%. The approach aims for a Pareto-optimal balance between accuracy and resource consumption.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Improves efficiency for multi-intent NLU systems, potentially reducing operational costs and latency.
RANK_REASON Academic paper detailing a new NLU architecture with benchmark results.
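The complexity-based routing described in the summary can be illustrated with a minimal sketch. Everything here is a hypothetical reconstruction of the general idea, not the paper's implementation: the function names, the keyword-counting complexity heuristic, and the split-on-"and" decomposition are all illustrative assumptions.

```python
# Illustrative sketch of complexity-based query routing: simple queries
# take a fast single-step path; complex multi-intent queries are
# recursively decomposed with an adaptive depth budget. All names and
# heuristics are assumptions, not the Adaptive ToR implementation.

def complexity_score(query: str) -> int:
    # Toy heuristic: coordinating cues often signal multiple intents.
    cues = ("and", "also", "then", "plus", ";")
    return sum(query.lower().count(c) for c in cues)

def single_step(query: str) -> list[str]:
    # Fast path: treat the whole query as a single intent.
    return [query.strip()]

def decompose(query: str, depth: int = 0, max_depth: int = 3) -> list[str]:
    # Adaptive-depth path: split on a likely intent boundary and recurse
    # until fragments look atomic or the depth budget is exhausted.
    if depth >= max_depth or complexity_score(query) == 0:
        return [query.strip()]
    head, _, tail = query.partition(" and ")
    if not tail:
        return [query.strip()]
    return decompose(head, depth + 1, max_depth) + decompose(tail, depth + 1, max_depth)

def route(query: str, threshold: int = 1) -> list[str]:
    # Router: cheap path below the complexity threshold, decomposition above.
    if complexity_score(query) < threshold:
        return single_step(query)
    return decompose(query)
```

A single-intent query such as "book a flight" scores zero and skips decomposition entirely, which is how this kind of design saves LLM invocations on the easy majority of traffic; "book a flight and reserve a hotel" crosses the threshold and is split into two intent fragments.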