3260 papers • 126 benchmarks • 313 datasets
Optimising expensive black-box functions is a common problem in many disciplines, including tuning the hyperparameters of machine learning algorithms, robotics, and other engineering design problems. Bayesian Optimisation is a principled and efficient technique for the global optimisation of such functions. The idea behind Bayesian Optimisation is to place a prior distribution over the target function and then update that prior with a set of “true” observations of the target function, obtained by expensive evaluation, to produce a posterior predictive distribution. The posterior then informs where to make the next observation of the target function through an acquisition function, which balances exploiting regions known to perform well against exploring regions where there is little information about the function’s response. Source: A Bayesian Approach for the Robust Optimisation of Expensive-to-Evaluate Functions
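To make this prior-update-acquire cycle concrete, here is a minimal sketch of a Bayesian optimisation loop, using scikit-learn's GaussianProcessRegressor as the surrogate and Expected Improvement as the acquisition function. The objective, bounds, and evaluation budget are illustrative assumptions, not part of any specific paper or benchmark.

```python
# Minimal Bayesian optimisation loop (minimisation): GP surrogate +
# Expected Improvement (EI) acquisition, maximised over a dense grid.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive black-box function (1-D for illustration).
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    # EI balances exploitation (low predicted mean, since we minimise)
    # against exploration (high predictive uncertainty).
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = y_best - mu - xi          # improvement over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
bounds = (-2.0, 4.0)

# Prior beliefs about the target function are encoded in the GP kernel.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

# A few initial "true" observations obtained by expensive evaluation.
X = rng.uniform(*bounds, size=(3, 1))
y = objective(X).ravel()

for _ in range(15):
    gp.fit(X, y)                                    # posterior update
    X_cand = np.linspace(*bounds, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)].reshape(1, -1)   # maximise acquisition
    y_next = objective(x_next).ravel()              # expensive evaluation
    X, y = np.vstack([X, x_next]), np.concatenate([y, y_next])

print(f"best x = {X[np.argmin(y)].item():.3f}, best y = {y.min():.3f}")
```

In practice the grid search over candidates is replaced by a proper optimiser of the acquisition function, and the GP kernel and its hyperparameters are chosen to reflect prior knowledge about the problem.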
HEBO is shown to significantly outperform existing black-box optimisers on the 108 machine learning hyperparameter-tuning tasks that make up the Bayesmark benchmark.
MES is shown to match or improve on the strong empirical performance of ES/PES while dramatically reducing the computational burden; it is also far more robust to the number of samples used to compute the entropy, and hence more efficient on higher-dimensional problems.
This paper proposes that DCBO can act as a blue agent when provided with a view of a simulated network and a causal model of how a red agent spreads within that network, and demonstrates a complete cyber-simulation system used to generate observational data for DCBO.
This work proposes a novel, computationally efficient method that combines Gaussian process optimisation and SMC-ABC to construct a Laplace approximation of the intractable posterior of stochastic volatility models, evaluated on both synthetic and real-world data.
This work proposes a new approach, Continuous and Categorical Bayesian Optimisation (CoCaBO), which combines the strengths of multi-armed bandits and Bayesian optimisation to select values for both categorical and continuous inputs, and model this mixed-type space using a Gaussian Process kernel.
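As a rough illustration of this mixed-type modelling idea, the sketch below combines the sum and the product of a simple categorical overlap kernel and a continuous RBF kernel, in the spirit of the CoCaBO kernel. The specific kernels, the trade-off weight lam, and the example inputs are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a mixed-type GP kernel: a weighted mix of the sum and the
# product of a categorical kernel and a continuous kernel (an assumption
# for illustration, not CoCaBO's exact definition).
import numpy as np

def k_cont(x, x2, lengthscale=1.0):
    # Standard RBF kernel on the continuous dimensions.
    d = np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(x2))
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def k_cat(h, h2):
    # Overlap kernel: fraction of categorical dimensions that match.
    h, h2 = np.atleast_1d(h), np.atleast_1d(h2)
    return np.mean(h == h2)

def k_mixed(x, h, x2, h2, lam=0.5):
    # lam trades off the additive and multiplicative combinations.
    k_sum = k_cat(h, h2) + k_cont(x, x2)
    k_prod = k_cat(h, h2) * k_cont(x, x2)
    return (1 - lam) * k_sum + lam * k_prod

# Example: one continuous dimension plus two categorical choices.
print(k_mixed(np.array([0.3]), np.array(["adam", "relu"]),
              np.array([0.5]), np.array(["adam", "tanh"])))
```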
It is found empirically that pathologies similar to those in the single-hidden-layer case can persist when performing variational inference in deeper networks, and a universality result is proved showing that there exist approximate posteriors in the classes considered which provide flexible uncertainty estimates.
A method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional, structured input spaces is introduced: label guidance from the black-box function is used to structure the VAE latent space, facilitating the Gaussian process fit and yielding improved BO performance.
This work is the first to investigate casting NAS as the problem of finding the optimal network generator, and proposes a new hierarchical, graph-based search space capable of representing an extremely large variety of network types while requiring only a few continuous hyper-parameters.
This paper formally shows that a random tree-based decomposition sampler enjoys favourable theoretical guarantees, effectively trading off maximal information gain against the functional mismatch between the actual black-box function and the surrogate induced by the decomposition.