
Researchers develop new training methods for neural networks to improve MILP tractability

Researchers have developed new training regularizers for neural network surrogate models that directly improve their tractability when embedded in mixed-integer linear programs (MILPs). The regularizers penalize large big-M constants and unstable ReLU neurons, and explicitly target the LP relaxation gap. Experiments show these methods can reduce MILP solve times by up to four orders of magnitude while maintaining predictive accuracy.
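To make the idea concrete, here is a minimal sketch of what such a regularized training objective might look like. This is an illustration under stated assumptions, not the paper's implementation: interval bound propagation (IBP) supplies per-neuron pre-activation bounds, and the layer sizes, input box, and the `lambda_bigm` / `lambda_unstable` weights are all hypothetical.

```python
import torch
import torch.nn as nn

class SurrogateMLP(nn.Module):
    """A small ReLU MLP of the kind embedded in MILPs as a surrogate."""
    def __init__(self, sizes=(2, 32, 32, 1)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes[:-1], sizes[1:])
        )

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.relu(layer(x))
        return self.layers[-1](x)

    def preactivation_bounds(self, x_lo, x_hi):
        """Propagate an input box [x_lo, x_hi] through the net (IBP),
        returning pre-activation bounds (l, u) for every hidden neuron."""
        lo, hi = x_lo, x_hi
        bounds = []
        for layer in self.layers[:-1]:
            W, b = layer.weight, layer.bias
            W_pos, W_neg = W.clamp(min=0.0), W.clamp(max=0.0)
            l = lo @ W_pos.T + hi @ W_neg.T + b
            u = hi @ W_pos.T + lo @ W_neg.T + b
            bounds.append((l, u))
            lo, hi = torch.relu(l), torch.relu(u)  # ReLU is monotone
        return bounds

def milp_aware_loss(model, x, y, x_lo, x_hi,
                    lambda_bigm=1e-4, lambda_unstable=1e-4):
    """Task loss plus two tractability penalties (illustrative weights)."""
    task = nn.functional.mse_loss(model(x), y)
    bigm, unstable = 0.0, 0.0
    for l, u in model.preactivation_bounds(x_lo, x_hi):
        # The big-M constant each ReLU needs is max(|l|, |u|);
        # shrinking it tightens the MILP's LP relaxation.
        bigm = bigm + torch.maximum(l.abs(), u.abs()).sum()
        # A neuron is "unstable" when l < 0 < u and so needs a binary
        # variable; this smooth proxy vanishes once the neuron is
        # provably active (l >= 0) or provably inactive (u <= 0).
        unstable = unstable + (torch.relu(-l) * torch.relu(u)).sum()
    return task + lambda_bigm * bigm + lambda_unstable * unstable

# Hypothetical usage: fit y = x1 + x2 on the unit box.
model = SurrogateMLP()
x = torch.rand(128, 2)
y = x.sum(dim=1, keepdim=True)
loss = milp_aware_loss(model, x, y, x_lo=torch.zeros(2), x_hi=torch.ones(2))
loss.backward()
```

The design intuition: both penalty terms vanish for provably stable neurons, so training is pushed toward networks whose exact MILP encoding needs fewer binary variables and smaller big-M constants.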

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Novel training techniques could significantly accelerate the solution of optimization problems that embed neural networks as surrogates.

RANK_REASON Academic paper introducing novel training regularizers for neural network surrogate models.

Read on arXiv cs.LG →

COVERAGE [1]

  1. arXiv cs.LG TIER_1 · Calvin Tsay

    Relaxation-Informed Training of Neural Network Surrogate Models

    arXiv:2604.22746v1 Announce Type: cross Abstract: ReLU neural networks trained as surrogate models can be embedded exactly in mixed-integer linear programs (MILPs), enabling global optimization over the learned function. The tractability of the resulting MILP depends on structural properties of the network, i.e., the number of b…

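For context on the "embedded exactly in MILPs" claim in the abstract: below is the standard big-M encoding of a single ReLU neuron, a textbook formulation rather than anything specific to this paper. Given pre-activation bounds L ≤ wᵀx + b ≤ U with L < 0 < U (an unstable neuron), one binary variable z suffices.

```latex
% Standard big-M MILP encoding of y = max(0, w^T x + b), assuming
% known pre-activation bounds L <= w^T x + b <= U with L < 0 < U.
\begin{align*}
  y &\ge w^\top x + b, &  y &\ge 0, \\
  y &\le w^\top x + b - L(1 - z), &  y &\le U z, \qquad z \in \{0, 1\}.
\end{align*}
```

When z = 1 the constraints force y = wᵀx + b ≥ 0, and when z = 0 they force y = 0 and wᵀx + b ≤ 0. The smaller |L| and |U| are, the tighter the LP relaxation, which is why penalizing big-M constants during training helps the solver; a neuron whose bounds satisfy L ≥ 0 or U ≤ 0 is stable and needs no binary variable at all.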