3260 papers • 126 benchmarks • 313 datasets
Bilevel optimization is a branch of optimization in which one optimization problem is nested within the constraints of another. The outer optimization task is usually referred to as the upper-level task, and the nested inner optimization task is referred to as the lower-level task. The lower-level problem acts as a constraint: only an optimal solution to the lower-level problem is a feasible candidate for the upper-level problem. Source: Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization
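As an illustration of this nesting, the following sketch (a toy example, not from the cited paper) tunes a ridge-regression regularization strength: the upper level minimizes validation loss, while the lower level is the constraint that the weights solve the regularized training problem exactly.

```python
import numpy as np

# Toy bilevel problem (illustrative, assumed data):
#   upper level: choose lam to minimize validation loss
#   lower level: w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(50, 5)), rng.normal(size=50)
X_va, y_va = rng.normal(size=(30, 5)), rng.normal(size=30)

def lower_level(lam):
    # ridge regression has a closed-form argmin, so the inner
    # problem is solved exactly for each candidate lam
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def upper_level(lam):
    # validation loss evaluated at the lower-level optimum w*(lam)
    w = lower_level(lam)
    return np.mean((X_va @ w - y_va) ** 2)

# crude grid search at the upper level; gradient-based bilevel methods
# would use the hypergradient d(upper)/d(lam) instead
lams = np.logspace(-3, 2, 20)
best = min(lams, key=upper_level)
```

Only weights that exactly solve the inner problem are evaluated by the outer objective, which is what makes the formulation bilevel rather than a joint minimization over `(w, lam)`.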
OptNet is presented, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks, and shows how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers.
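The implicit-differentiation idea behind such layers can be shown on a small quadratic problem (an assumed toy setup, not the OptNet implementation itself): the inner optimum satisfies a stationarity condition, and differentiating that condition yields the gradient of the solution with respect to a parameter without unrolling any solver.

```python
import numpy as np

# Inner problem (toy, assumed): min_w 0.5 w'Qw + (c + theta*b)'w
# Stationarity: Q w* + c + theta*b = 0, so the implicit function
# theorem gives dw*/dtheta = -Q^{-1} b.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Q = A @ A.T + np.eye(4)        # symmetric positive definite
c, b = rng.normal(size=4), rng.normal(size=4)
theta = 0.7

w_star = np.linalg.solve(Q, -(c + theta * b))   # exact inner solution
dw_dtheta = np.linalg.solve(Q, -b)              # implicit gradient

# finite-difference check of the implicit gradient
eps = 1e-6
w_plus = np.linalg.solve(Q, -(c + (theta + eps) * b))
fd = (w_plus - w_star) / eps
```

The same pattern, applied to the KKT conditions of a quadratic program instead of a simple stationarity equation, is what allows exact differentiation through an optimization layer.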
Information theoretically, it is proved that the mixture of local and global models can reduce the generalization error and a communication-reduced bilevel optimization method is proposed, which reduces the communication rounds to $O(\sqrt{T})$ and can achieve a convergence rate of $O(1/T)$ with some residual error.
This work aims to adapt regularization hyperparameters for neural networks by fitting compact approximations to the best-response function, which maps hyperparameters to optimal weights and biases, and outperforms competing hyperparameter optimization methods on large-scale deep learning problems.
A second-order approach to bilevel optimization, a type of mathematical programming in which the solution to a parameterized optimization problem is itself to be optimized as a function of the parameters, is derived.
It is found that optimization with the approximate gradient computed using few-step back-propagation often performs comparably to optimization with the exact gradient, while requiring far less memory and half the computation time.
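The few-step approximation can be sketched on a one-dimensional problem where the exact hypergradient is known in closed form (an assumed toy setup, not the paper's experiments): the inner loop runs K gradient steps while propagating the derivative of the iterate with respect to the hyperparameter.

```python
# Truncated unrolled differentiation (toy sketch, assumed values):
#   inner loss  L_in(w)  = 0.5 * (w - lam)^2,  so w* = lam
#   outer loss  L_out(w) = 0.5 * (w - target)^2
eta, target, lam, w0 = 0.5, 3.0, 1.0, 0.0

def unrolled_hypergrad(K):
    # K inner gradient steps, tracking dw/dlam alongside w
    # (forward-mode differentiation through the unrolled loop)
    w, dw_dlam = w0, 0.0
    for _ in range(K):
        w = w - eta * (w - lam)
        dw_dlam = dw_dlam - eta * (dw_dlam - 1.0)
    return (w - target) * dw_dlam   # chain rule through the outer loss

exact = lam - target      # at w* = lam, dw*/dlam = 1
approx = unrolled_hypergrad(5)
```

Even five inner steps place the approximate hypergradient close to the exact one here, mirroring the paper's observation that short unrolls can suffice at a fraction of the memory cost.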
This paper proposes a new method for solving bilevel optimization problems using the classical penalty function approach, which avoids computing the inverse and easily handles additional constraints; convergence is proved under mild conditions, and the exact hypergradient is shown to be obtained asymptotically.
This work poses crafting poisons more generally as a bilevel optimization problem, where the inner level corresponds to training a network on a poisoned dataset and the outer level to updating those poisons to achieve a desired behavior on the trained model, and proposes MetaPoison, a first-order method to solve this optimization quickly.
This work proposes an efficient approach to automatically train a network that learns an effective distribution of transformations to improve its generalization and produces an image classification accuracy that is comparable to or better than carefully hand-crafted data augmentation.
This paper proposes and formulates the graph adversarial immunization problem, i.e., vaccinating an affordable fraction of node pairs, connected or unconnected, to improve the certifiable robustness of a graph against any admissible adversarial attack, and proposes an effective algorithm, called AdvImmune, which optimizes with meta-gradients in a discrete way to circumvent the computationally expensive combinatorial optimization in solving the adversarial immunization problem.