Researchers have developed a new active-learning method for machine learning interatomic potentials (MLIPs) that leverages pretrained model representations. The approach derives acquisition signals from the latent space of a pretrained MACE potential, specifically a finite-width neural tangent kernel and an activation kernel. These signals reduce the data required for training, with average reductions of 38% in energy error and 28% in force error on reactive chemistry benchmarks.
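As an illustrative sketch only (not the authors' implementation), kernel-based acquisition from latent features could work as follows: build a linear "activation kernel" from a model's last-layer features and greedily select the unlabeled structures with the largest kernel posterior variance. The feature matrix `Phi` below is a random stand-in for real MACE latent representations.

```python
import numpy as np

# Hypothetical stand-in for latent features extracted from a pretrained
# MLIP (e.g. per-structure descriptors); n_pool candidates, d features.
rng = np.random.default_rng(0)
n_pool, d = 200, 16
Phi = rng.normal(size=(n_pool, d))

def greedy_max_variance(Phi, n_select):
    """Greedy active-learning selection under the activation kernel
    K = Phi @ Phi.T: repeatedly pick the point with the largest
    (noiseless GP) posterior variance, then condition on it.

    For a linear kernel, conditioning on a selected point is equivalent
    to deflating the feature matrix along that point's direction
    (Gram-Schmidt), so the remaining squared row norms are exactly the
    updated posterior variances.
    """
    Phi = Phi.copy()
    selected = []
    for _ in range(n_select):
        var = np.sum(Phi**2, axis=1)        # diag of Phi @ Phi.T
        i = int(np.argmax(var))
        selected.append(i)
        u = Phi[i] / (np.linalg.norm(Phi[i]) + 1e-12)
        Phi = Phi - np.outer(Phi @ u, u)    # project out chosen direction
    return selected

batch = greedy_max_variance(Phi, 10)        # indices to label next
```

The deflation step guarantees the batch is diverse: once a direction in feature space is covered, points similar to it lose their variance and stop being selected.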
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT This method could significantly reduce the data requirements for training MLIPs, accelerating research in materials science and chemistry.
RANK_REASON This is a research paper detailing a new method for active learning in MLIPs.