PulseAugur

New research explores advanced LLM memory systems and hybrid processing architectures

Researchers have developed MEMAUDIT, a new evaluation protocol for assessing the long-term memory writing capabilities of LLM agents. The protocol separates memory writing from retrieval and reasoning, allowing a more precise analysis of how agents compress past interactions into persistent memory under budget constraints. Concurrently, a new hybrid processing-using-memory architecture, DARTH-PUM, integrates analog and digital processing-using-memory techniques to enable general-purpose computation within memory arrays. It shows significant speedups for applications including large language models, AES encryption, and convolutional neural networks.
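The key idea in MEMAUDIT is to score only what an agent wrote to memory, independent of any downstream retriever or reasoning model. A minimal sketch of that separation, with all names and the truncation "compressor" being illustrative assumptions rather than the paper's actual API:

```python
# Hypothetical sketch of a MEMAUDIT-style evaluation loop.
# Function and variable names are illustrative, not from the paper.

def write_memory(interactions, budget_bytes, compress):
    """Agent-side step: compress past interactions into a persistent
    memory package no larger than budget_bytes."""
    package = compress(interactions)
    assert len(package.encode("utf-8")) <= budget_bytes, "budget exceeded"
    return package

def oracle_score(package, probes):
    """Oracle-side step: score only what the package preserves,
    with no retriever or reasoning model in the loop."""
    hits = sum(1 for fact in probes if fact in package)
    return hits / len(probes)

# Toy run with a naive join-and-truncate "compressor" (an assumption
# for illustration; real agents would summarize far more aggressively).
interactions = ["user likes OCaml", "deadline is Friday", "budget is 20k"]
probes = ["OCaml", "Friday", "20k"]
package = write_memory(interactions, budget_bytes=64,
                       compress=lambda xs: "; ".join(xs)[:64])
print(oracle_score(package, probes))  # -> 1.0
```

Because the score depends only on the written package, two agents with different retrievers can be compared purely on how well they spend the memory budget.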

Summary written by gemini-2.5-flash-lite from 4 sources. How we write summaries →

IMPACT New evaluation protocols and hardware architectures could accelerate the development of more capable and efficient LLM agents.

RANK_REASON This cluster contains multiple research papers detailing new evaluation protocols and hardware architectures for LLMs.

Read on arXiv cs.CL →

COVERAGE [4]

  1. arXiv cs.AI TIER_1 · Nishant Bhargava, Rodrigo Sobral Barrento ·

    MEMAUDIT: An Exact Package-Oracle Evaluation Protocol for Budgeted Long-Term LLM Memory Writing

    arXiv:2605.02199v1 Announce Type: new Abstract: Long-term LLM agents must compress streams of past interactions into persistent memory before future queries are known. Existing evaluations usually measure final question-answering accuracy, which entangles memory writing with retr…

  2. arXiv cs.LG TIER_1 · Ryan Wong, Ben Feinberg, Saugata Ghose ·

    DARTH-PUM: A Hybrid Processing-Using-Memory Architecture

    arXiv:2602.16075v2 Announce Type: replace-cross Abstract: Analog processing-using-memory (PUM; a.k.a. in-memory computing) makes use of electrical interactions inside memory arrays to perform bulk matrix-vector multiplication (MVM) operations. However, many popular matrix-based k…

  3. arXiv cs.CL TIER_1 · Yanchen Wu, Tenghui Lin, Yingli Zhou, Fangyuan Zhang, Qintian Guo, Xun Zhou, Sibo Wang, Xilin Liu, Yuchi Ma, Yixiang Fang ·

    Memory in the LLM Era: Modular Architectures and Strategies in a Unified Framework

    arXiv:2604.01707v2 Announce Type: replace Abstract: Memory emerges as the core module in the large language model (LLM)-based agents for long-horizon complex tasks (e.g., multi-turn dialogue, game playing, scientific discovery), where memory can enable knowledge accumulation, ite…

  4. Medium — Claude tag TIER_1 · Dhodraj Sundaram ·

    Twenty Repos, One Self-Updating Architectural Memory

Read on Medium: https://medium.com/@dhodrajsdr192/twenty-repos-one-self-updating-architectural-memory-68678e1af5c9?source=rss------claude-5
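The core operation the DARTH-PUM abstract (item 2 above) says analog PUM accelerates is bulk matrix-vector multiplication inside a memory array: a matrix stays resident in the array, and applying an input vector to the wordlines produces all output sums at once, like currents accumulating on bitlines. A toy software model of that single operation, with the structure being an illustrative assumption rather than the architecture's actual design:

```python
# Toy model of the analog processing-using-memory (PUM) primitive.
# Names and structure are illustrative, not DARTH-PUM's real design.

def analog_mvm(weights, x):
    """One 'array activation': every output element accumulates in
    parallel, analogous to currents summing along each bitline."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

W = [[1, 2], [3, 4]]   # matrix resident in the memory array
x = [10, 1]            # input vector applied to the wordlines
print(analog_mvm(W, x))  # -> [12, 34]
```

The hybrid angle is that many kernels mix MVM-shaped work with operations that do not fit this primitive, which is where the digital PUM side would take over.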