PulseAugur
research · [2 sources]

Neural ODEs advance with mixed precision training and causal forecasting methods

Researchers have developed a mixed-precision training framework for Neural Ordinary Differential Equations (Neural ODEs) that reduces computational cost. The framework uses low-precision computations to evaluate network outputs and store intermediate states, while preserving numerical stability through custom scaling and high-precision accumulation of solutions and gradients. The approach, released as an open-source PyTorch package named "rampde", achieves roughly 50% memory reduction and up to a 2x speedup on tasks such as image classification and generative modeling, with accuracy comparable to single-precision training.
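The rampde package's actual API is not reproduced here; the following is only a minimal sketch of the underlying idea, under stated assumptions: a fixed-step RK4 solver whose vector-field evaluations run in float16 under torch.autocast, while the solution state is accumulated in float32 and gradients are guarded by loss scaling. The toy model, step count, and training objective are illustrative assumptions, not the paper's experiments.

```python
# Sketch only (not the rampde API): low-precision vector-field evaluations,
# high-precision (float32) accumulation of the ODE solution, and loss scaling.
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Autonomous neural vector field dy/dt = f(y); t is accepted but unused."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, y):
        return self.net(y)

def rk4_mixed(f, y0, t0=0.0, t1=1.0, steps=16):
    """Fixed-step RK4 where stage evaluations may run in float16 under
    autocast, but the running state y is accumulated in float32."""
    y, h = y0.float(), (t1 - t0) / steps
    use_amp = y0.is_cuda  # keep this toy example CPU-safe
    for i in range(steps):
        t = t0 + i * h
        with torch.autocast(device_type="cuda", dtype=torch.float16,
                            enabled=use_amp):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1.float())
            k3 = f(t + h / 2, y + h / 2 * k2.float())
            k4 = f(t + h, y + h * k3.float())
        # High-precision accumulation of the solution, outside autocast.
        y = y + h / 6 * (k1.float() + 2 * k2.float()
                         + 2 * k3.float() + k4.float())
    return y

f = VectorField(dim=2)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

y0 = torch.randn(32, 2)
target = torch.zeros(32, 2)  # toy objective: drive the flow toward the origin
for _ in range(10):
    opt.zero_grad()
    loss = ((rk4_mixed(f, y0) - target) ** 2).mean()
    scaler.scale(loss).backward()  # loss scaling guards float16 gradients
    scaler.step(opt)
    scaler.update()
```

Keeping the accumulation step outside the autocast region is what preserves a high-precision solution trajectory; the float16 savings come from the network evaluations and stored intermediate states, which dominate memory and compute.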

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Introduces a method that substantially reduces memory use and speeds up training for Neural ODEs, potentially enabling larger and more complex continuous-time models.

RANK_REASON This is a research paper detailing a new training method for a specific type of neural network architecture.

Read on arXiv cs.AI →

COVERAGE [2]

  1. arXiv cs.AI TIER_1 · Elena Celledoni, Brynjulf Owren, Lars Ruthotto, Tianjiao Nicole Yang

    Mixed Precision Training of Neural ODEs

    arXiv:2510.23498v2 Announce Type: replace-cross Abstract: Exploiting low-precision computations has become a standard strategy in deep learning to address the growing computational costs imposed by ever larger models and datasets. However, naively performing all computations in l…

  2. Hugging Face Daily Papers TIER_1

    Observable Neural ODEs for Identifiable Causal Forecasting in Continuous Time

    Causal inference in continuous-time sequential decision problems is challenged by hidden confounders. We show that, in latent state-space models with time-varying interventions, observability of the latent dynamics from observed data is necessary for identifying dynamic treatment…
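    The abstract describes a latent state-space setup in which only an observation of the latent state is available and interventions vary over time. As a structural illustration only (all names here are hypothetical, and the code does not implement the paper's identifiability analysis), a latent Neural ODE driven by an intervention input, with an observation map producing what the forecaster actually sees, might look like:

```python
# Hypothetical sketch of the latent state-space structure the abstract
# describes: dz/dt = f(z, a) with time-varying intervention a(t), observed
# only through x(t) = g(z(t)). Whether treatment effects are identifiable
# depends on observability of the latent dynamics, which this toy code
# does not verify.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    """dz/dt = f(z, a): latent drift driven by state and intervention."""
    def __init__(self, z_dim: int, a_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + a_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, z_dim))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

class ObservationMap(nn.Module):
    """x = g(z): the only quantity available to the forecaster."""
    def __init__(self, z_dim: int, x_dim: int):
        super().__init__()
        self.g = nn.Linear(z_dim, x_dim)

    def forward(self, z):
        return self.g(z)

def rollout(f, g, z0, actions, h=0.1):
    """Euler-integrate the latent ODE under a sequence of interventions and
    emit observations; swapping the action sequence simulates a different
    treatment regime."""
    z, xs = z0, []
    for a in actions:          # actions: (T, batch, a_dim)
        z = z + h * f(z, a)    # explicit Euler step on the latent state
        xs.append(g(z))
    return torch.stack(xs)     # (T, batch, x_dim)

f, g = LatentDynamics(z_dim=4, a_dim=1), ObservationMap(z_dim=4, x_dim=2)
z0 = torch.zeros(8, 4)
treated = rollout(f, g, z0, torch.ones(20, 8, 1))
control = rollout(f, g, z0, torch.zeros(20, 8, 1))
effect = (treated - control).mean()  # toy contrast between the two regimes
```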