PulseAugur
research

New kernels from pretrained MACE potentials improve active learning for MLIPs

Researchers have developed a new active learning method for machine learning interatomic potentials (MLIPs) that uses the representations of a pretrained model. The approach derives acquisition signals from the latent space of a pretrained MACE potential, specifically a finite-width neural tangent kernel and an activation kernel. These signals reduce the data required for training, yielding an average reduction of 38% in energy error and 28% in force error on reactive chemistry benchmarks.
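To make the idea concrete, here is a minimal sketch of kernel-based batch acquisition of the kind the summary describes: candidates are compared through a kernel built from pretrained latent features, and a batch is chosen greedily to maximize diversity. This is a generic illustration, not the authors' implementation; the random `Z` matrix is a hypothetical stand-in for per-configuration features extracted from a pretrained MACE-style network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for per-configuration latent features; in the real
# pipeline these would be extracted from a pretrained potential's network.
Z = rng.normal(size=(200, 16))  # 200 candidate configurations, 16-dim features

# "Activation kernel": similarity of candidates in the pretrained latent space.
K = Z @ Z.T

def greedy_diverse(K, k):
    """Greedily pick k candidates with maximal residual variance under the
    kernel (a pivoted-Cholesky / farthest-point selection in feature space)."""
    n = K.shape[0]
    selected = []
    var = np.diag(K).astype(float).copy()  # residual variance per candidate
    proj = np.zeros((k, n))                # accumulated Cholesky rows
    for i in range(k):
        j = int(np.argmax(var))
        selected.append(j)
        # Condition the kernel on the chosen point and update variances.
        p = (K[j] - proj[:i].T @ proj[:i, j]) / np.sqrt(var[j])
        proj[i] = p
        var = var - p**2
        var[selected] = -np.inf  # never re-pick a selected candidate
    return selected

batch = greedy_diverse(K, 8)  # indices of the 8 most informative candidates
```

In an actual MLIP loop, the selected configurations would then be labeled with quantum chemical calculations and added to the training set, which is where the data savings come from.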

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT This method could significantly reduce the data requirements for training MLIPs, accelerating research in materials science and chemistry.

RANK_REASON This is a research paper detailing a new method for active learning in MLIPs.

Read on arXiv cs.LG →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Eszter Varga-Umbrich, Shikha Surana, Paul Duckworth, Jules Tilly, Olivier Peltre, Zachary Weller-Davies ·

    Pretrained Model Representations as Acquisition Signals for Active Learning of MLIPs

    arXiv:2605.03964v1 · Abstract: Training machine learning interatomic potentials (MLIPs) for reactive chemistry is often bottlenecked by the high cost of quantum chemical labels and the scarcity of transition state configurations in candidate pools. Active learnin…

  2. arXiv cs.LG TIER_1 · Zachary Weller-Davies ·

    Pretrained Model Representations as Acquisition Signals for Active Learning of MLIPs

    Training machine learning interatomic potentials (MLIPs) for reactive chemistry is often bottlenecked by the high cost of quantum chemical labels and the scarcity of transition state configurations in candidate pools. Active learning (AL) can mitigate these costs, but its effecti…