PulseAugur

Geometric Unlearning enables LLMs to remove data with minimal disclosure

Researchers have introduced Geometric Unlearning (GU), a method for selectively removing specific information from large language models without access to the original training data. The approach operates on the model's internal states: it distills a safe reference geometry from a small set of reference prompts, then uses synthetic prompts to align the model's states with that geometry while minimizing the impact on general utility. Experiments on privacy benchmarks showed that GU suppresses the target information using only minimal synthetic data.
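The two-step mechanism described in the summary, distill a safe reference geometry and then align internal states to it via synthetic prompts, can be sketched as a toy optimization. Everything below (the linear stand-in "model", the centroid target, the loss, and all names) is a hypothetical illustration of the idea, not the authors' implementation:

```python
import numpy as np

# Toy sketch only: a linear map stands in for the model's hidden states,
# and a hidden-state centroid stands in for the "safe geometry".
rng = np.random.default_rng(0)

def hidden_states(W, X):
    # Stand-in "model": hidden state is a linear map of a prompt embedding.
    return X @ W

d_in, d_h = 8, 4
W = rng.normal(size=(d_in, d_h))        # parameters to be edited in place
X_safe = rng.normal(size=(16, d_in))    # reference ("safe") prompt embeddings
X_syn = rng.normal(size=(32, d_in))     # synthetic prompts probing the forget topic

# Step 1: distill the reference geometry once from the safe prompts.
mu_safe = hidden_states(W, X_safe).mean(axis=0)

def align_loss(W):
    # Mean squared deviation of synthetic-prompt states from the safe centroid.
    diff = hidden_states(W, X_syn) - mu_safe
    return float((diff ** 2).sum(axis=1).mean())

init_loss = align_loss(W)

# Step 2: gradient descent pulls the synthetic-prompt states toward mu_safe,
# touching only the model parameters, never the original training data.
lr = 0.2
for _ in range(300):
    diff = hidden_states(W, X_syn) - mu_safe
    grad = 2.0 * X_syn.T @ diff / len(X_syn)   # d(align_loss)/dW
    W -= lr * grad

final_loss = align_loss(W)
```

In a real LLM the "geometry" would live in high-dimensional activation space and the update would be constrained to preserve utility; the sketch only shows the distill-then-align loop.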

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Offers a more efficient and data-light approach to LLM unlearning, potentially improving privacy compliance for deployed models.

RANK_REASON The cluster contains an arXiv preprint detailing a new method for LLM unlearning.

Read on arXiv cs.CL →

COVERAGE [2]

  1. arXiv cs.CL TIER_1 · Chenchen Tan, Xinghao Li, Shujie Cui, Youyang Qu, Cunjian Chen, Longxiang Gao ·

    Less is More: Geometric Unlearning for LLMs with Minimal Data Disclosure

    arXiv:2605.01735v1 · Abstract: As large language models (LLMs) are increasingly deployed in real-world systems, they must support post-hoc removal of specific content to meet privacy and governance requirements. This motivates selective unlearning, which suppress…

  2. arXiv cs.CL TIER_1 · Longxiang Gao ·

    Less is More: Geometric Unlearning for LLMs with Minimal Data Disclosure

    As large language models (LLMs) are increasingly deployed in real-world systems, they must support post-hoc removal of specific content to meet privacy and governance requirements. This motivates selective unlearning, which suppresses information about a particular entity or topi…