PulseAugur
EngramaBench evaluates long-term conversational memory for LLMs

Researchers have introduced EngramaBench, a new benchmark designed to evaluate the long-term conversational memory capabilities of large language models. The benchmark comprises five distinct personas and one hundred multi-session conversations, with queries testing factual recall, temporal reasoning, and synthesis. In evaluations, GPT-4o with full-context prompting achieved the highest overall score, while a graph-structured memory system called Engrama performed best on cross-space reasoning.
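To make the evaluation setup concrete, the sketch below shows one plausible way to structure such a benchmark: conversations grouped into sessions per persona, queries tagged by type, and per-type exact-match accuracy. All class names, fields, and the scoring rule are illustrative assumptions, not EngramaBench's actual schema or metric.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical data model: names and scoring are illustrative,
# not EngramaBench's actual schema or metric.

@dataclass
class Query:
    question: str
    answer: str
    kind: str  # "factual", "temporal", or "synthesis"

@dataclass
class Conversation:
    persona: str
    sessions: list  # each session is a list of utterance strings
    queries: list   # Query objects posed after all sessions

def score(conversations, answer_fn):
    """Exact-match accuracy per query kind.

    answer_fn(history, question) -> predicted answer string,
    where history is the flattened multi-session transcript.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for conv in conversations:
        # Flatten all sessions into one chronological transcript.
        history = [u for session in conv.sessions for u in session]
        for q in conv.queries:
            total[q.kind] += 1
            pred = answer_fn(history, q.question)
            if pred.strip().lower() == q.answer.strip().lower():
                correct[q.kind] += 1
    return {k: correct[k] / total[k] for k in total}

# Usage: an oracle answerer scores 1.0 on a toy factual query.
conv = Conversation(
    persona="avid hiker",
    sessions=[["I adopted a dog named Miso."], ["Miso turned two today."]],
    queries=[Query("What is the dog's name?", "Miso", "factual")],
)
print(score([conv], lambda history, question: "Miso"))
```

Full-context prompting corresponds to passing the entire `history` to the model, whereas a memory system like Engrama would retrieve a subset of it (e.g., via a graph over entities and sessions) before answering.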

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Introduces a new benchmark for evaluating LLM long-term memory, potentially guiding future memory system development.

RANK_REASON This is a research paper introducing a new benchmark for evaluating LLM memory.

Read on arXiv cs.CL →


COVERAGE [1]

  1. arXiv cs.CL TIER_1 · Julian Acuna ·

    EngramaBench: Evaluating Long-Term Conversational Memory with Structured Graph Retrieval

    Large language model assistants are increasingly expected to retain and reason over information accumulated across many sessions. We introduce EngramaBench, a benchmark for long-term conversational memory built around five personas, one hundred multi-session conversations, and on…