Researchers have developed new training regularizers for neural network surrogate models that directly improve their tractability within mixed-integer linear programs (MILPs). These regularizers penalize factors like big-M constants and unstable neurons, and explicitly address the LP relaxation gap. Experiments show these methods can reduce MILP solve times by up to four orders of magnitude while maintaining accuracy.
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Novel training techniques could significantly accelerate optimization problems that use neural networks as surrogates.
RANK_REASON Academic paper introducing novel training regularizers for neural network surrogate models.
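To make the idea concrete: when a ReLU network is embedded in a MILP, each neuron is encoded with big-M constraints whose M values come from pre-activation bounds (e.g. via interval arithmetic), and neurons whose bounds straddle zero ("unstable" neurons) force a binary decision on the solver. The sketch below is an illustrative reconstruction, not the paper's actual method: the function names, penalty forms, and coefficients `alpha`/`beta` are hypothetical, and a real implementation would compute these bounds inside an autodiff framework so the penalty is differentiable during training.

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    """Propagate input-box bounds [lo, hi] through ReLU layers with
    interval arithmetic, returning pre-activation bounds per layer."""
    bounds = []
    for W, b in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        pre_lo = Wp @ lo + Wn @ hi + b  # worst-case low pre-activation
        pre_hi = Wp @ hi + Wn @ lo + b  # worst-case high pre-activation
        bounds.append((pre_lo, pre_hi))
        # ReLU maps the interval into the next layer's input box.
        lo, hi = np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)
    return bounds

def milp_regularizer(weights, biases, lo, hi, alpha=1e-3, beta=1e-3):
    """Hypothetical training penalty: alpha * (sum of big-M magnitudes)
    plus beta * (soft count of unstable neurons, i.e. neurons whose
    pre-activation bounds straddle zero)."""
    big_m = 0.0
    unstable = 0.0
    for pre_lo, pre_hi in interval_bounds(weights, biases, lo, hi):
        # Big-M for each neuron is the larger bound magnitude.
        big_m += np.sum(np.maximum(np.abs(pre_lo), np.abs(pre_hi)))
        # Positive only when pre_lo < 0 < pre_hi (the unstable case).
        unstable += np.sum(np.minimum(np.maximum(-pre_lo, 0.0),
                                      np.maximum(pre_hi, 0.0)))
    return alpha * big_m + beta * unstable
```

Shrinking weight magnitudes tightens the propagated intervals, so the penalty rewards networks whose MILP encodings have smaller big-M constants and fewer binary variables to branch on, which is the mechanism behind the reported solve-time reductions.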