PulseAugur

New methods enhance robust optimization with ensemble models and worst-case distribution analysis

Researchers have developed new methods for distributionally robust optimization, a technique that accounts for uncertainty in data distributions. One approach, Ensemble Distributionally Robust Bayesian Optimisation, uses an ensemble of surrogate models to improve robustness and achieves sublinear regret bounds. Another paper introduces distributionally robust multi-objective optimization (DR-MOO), with algorithms that minimize objectives under worst-case distributions and offer improved sample complexity. A third proposes a distributionally robust framework for learning hyperparameters of first-order methods, unifying classical learning to optimize with worst-case optimal algorithm design.
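All three methods instantiate the same min-max template; in generic notation (not copied from any of the papers), the goal is to pick parameters that perform well under the worst distribution in an ambiguity set around the empirical data distribution:

```latex
% Generic distributionally robust optimization (DRO) template:
% \hat{P} is the empirical distribution and \mathcal{U}(\hat{P}) an
% ambiguity set of nearby distributions (e.g. a Wasserstein or KL ball).
\min_{\theta}\; \sup_{Q \in \mathcal{U}(\hat{P})}\; \mathbb{E}_{x \sim Q}\bigl[\ell(\theta; x)\bigr]
```

The papers differ mainly in what plays the role of \(\theta\) (acquisition decisions, multi-objective model parameters, optimizer hyperparameters) and in how \(\mathcal{U}\) is chosen; the learning-to-optimize paper, for instance, uses a Wasserstein ball around the dataset of problem instances.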

Summary written by gemini-2.5-flash-lite from 8 sources. How we write summaries →

IMPACT These advancements in robust optimization techniques could lead to more reliable and adaptable AI systems, particularly in scenarios with uncertain or shifting data distributions.

RANK_REASON Multiple academic papers published on arXiv detailing new methods in distributionally robust optimization.

Read on arXiv cs.LG →

COVERAGE [8]

  1. arXiv cs.LG TIER_1 · Tigran Ramazyan, Denis Derkach ·

    Ensemble Distributionally Robust Bayesian Optimisation

    We study zeroth-order optimisation under context distributional uncertainty, a setting commonly tackled using Bayesian optimisation (BO). A prevailing strategy to make BO more robust to the complex and noisy nature of data is to employ an ensemble as the surrogate model, thereby …

  2. arXiv cs.LG TIER_1 · Yufeng Yang, Fangning Zhuo, Ziyi Chen, Heng Huang, Yi Zhou ·

    Distributionally Robust Multi-Objective Optimization

    arXiv:2605.05660v1 · Multi-objective optimization (MOO) has received growing attention in applications that require learning under multiple criteria. However, the existing MOO formulations do not explicitly account for distributional shifts in the data.…

  3. arXiv cs.LG TIER_1 · Vinit Ranjan, Jisun Park, Bartolomeo Stellato ·

    Distributionally-Robust Learning to Optimize

    arXiv:2605.06585v1 · We propose a distributionally robust approach to learning hyperparameters for first-order methods in convex optimization. Given a dataset of problem instances, we minimize a Wasserstein distributionally robust version of the perform…

  4. arXiv cs.LG TIER_1 · Vinit Ranjan, Jisun Park, Bartolomeo Stellato ·

    Distributionally-Robust Learning to Optimize

    We propose a distributionally robust approach to learning hyperparameters for first-order methods in convex optimization. Given a dataset of problem instances, we minimize a Wasserstein distributionally robust version of the performance estimation problem (PEP) over algorithm par…

  5. arXiv cs.LG TIER_1 · Daphne Theodorakopoulos, Marcel Wever, Marius Lindauer ·

    Dynamic Hyperparameter Importance for Efficient Multi-Objective Optimization

    arXiv:2601.03166v2 · Choosing a suitable ML model is a complex task that can depend on several objectives, e.g., accuracy, fairness, or energy consumption. In practice, this requires trading off multiple, often competing, objectives through multi-ob…

  6. arXiv stat.ML TIER_1 · Rafael Oliveira ·

    Kernel-based guarantees for nonlinear parametric models in Bayesian optimization

    Modern Bayesian optimization and adaptive sampling methods increasingly rely on nonlinear parametric models, yet theoretical guarantees for such models under adaptive data collection remain limited. Existing analyses largely focus on Gaussian processes, kernel machines, linear mo…

  7. arXiv stat.ML TIER_1 · Hany Abdulsamad, Sahel Iqbal, Christian A. Naesseth, Takuo Matsubara, Adrien Corenflos ·

    Maximin Robust Bayesian Experimental Design

    arXiv:2603.14094v2 · We address the brittleness of Bayesian experimental design under model misspecification by formulating the problem as a max-min game between the experimenter and an adversarial nature subject to information-theoretic constraint…

  8. arXiv stat.ML TIER_1 · Tigran Ramazyan, Denis Derkach ·

    Ensemble Distributionally Robust Bayesian Optimisation

    arXiv:2605.07565v1 · We study zeroth-order optimisation under context distributional uncertainty, a setting commonly tackled using Bayesian optimisation (BO). A prevailing strategy to make BO more robust to the complex and noisy nature of data is to e…
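A common thread in the abstracts above is an inner worst-case expectation over an ambiguity set. When that set is a KL-divergence ball around the empirical distribution, the supremum has a well-known one-dimensional dual, sketched below in a minimal, stdlib-only Python example (illustrative background only, not code from any of the listed papers; the function name, grid, and numbers are made up):

```python
import math

def worst_case_mean(losses, rho, lam_grid=None):
    """Worst-case expected loss over a KL ball of radius rho around the
    empirical distribution, via the standard scalar dual
        sup_{KL(Q || P_n) <= rho} E_Q[loss]
          = inf_{lam > 0}  lam*rho + lam * log( mean_i exp(loss_i / lam) ),
    approximated here by a coarse grid search over lam."""
    n = len(losses)
    if lam_grid is None:
        # log-spaced grid spanning several orders of magnitude
        lam_grid = [10 ** (k / 10.0) for k in range(-30, 31)]
    best = float("inf")
    for lam in lam_grid:
        # numerically stable log-mean-exp of loss/lam
        m = max(l / lam for l in losses)
        lme = m + math.log(sum(math.exp(l / lam - m) for l in losses) / n)
        best = min(best, lam * rho + lam * lme)
    return best

losses = [0.2, 0.5, 1.0, 3.0]
plain = sum(losses) / len(losses)           # ordinary empirical mean
robust = worst_case_mean(losses, rho=0.1)   # lies between plain and max(losses)
```

Any grid point gives a valid upper bound on the supremum, so the grid minimum is always at least the empirical mean (with equality only at rho = 0) and at most the maximum loss; exact methods replace the grid with a one-dimensional convex solve over lam.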