The papers below track progress in robust design.
Robust Vision Transformer (RVT) is a new vision transformer with superior performance and stronger robustness and generalization than previous ViTs and state-of-the-art CNNs.
This work compares eight state-of-the-art deep inverse models (DIMs) on three distinct artificial electromagnetic material (AEM) design problems, including two models that are new to the AEM community, and shows that DIMs can rapidly produce accurate designs achieving a custom desired scattering response on all three problems.
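A common way such comparisons are scored is re-simulation error: feed the target spectrum to the inverse model, push the proposed design back through the forward simulator, and measure the mismatch. The sketch below assumes hypothetical `inverse_model` and `forward_sim` callables and is not tied to any of the eight compared models:

```python
import numpy as np

def resimulation_error(inverse_model, forward_sim, y_targets):
    """Score an inverse-design model by re-simulation error.

    inverse_model: maps a target scattering spectrum y -> candidate design x
    forward_sim:   maps a design x -> its simulated scattering spectrum
    Both are placeholders for whatever models and simulators are being compared.
    """
    errors = []
    for y_star in y_targets:
        x_hat = inverse_model(y_star)                   # proposed design for this target
        y_hat = forward_sim(x_hat)                      # spectrum the design actually produces
        errors.append(np.mean((y_hat - y_star) ** 2))   # per-target mean squared error
    return float(np.mean(errors))

# Toy usage with stand-in callables (not a real AEM simulator):
rng = np.random.default_rng(0)
forward_sim = lambda x: np.sin(x)                        # pretend physics
inverse_model = lambda y: np.arcsin(np.clip(y, -1, 1))   # pretend inverse model
targets = [rng.uniform(-1, 1, size=8) for _ in range(5)]
print("mean re-simulation MSE:", resimulation_error(inverse_model, forward_sim, targets))
```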
This work proposes a new method, Conditioning by Adaptive Sampling, which yields state-of-the-art results on a protein fluorescence problem, as compared to other recently published approaches.
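The sketch below is only a toy illustration of the adaptive-sampling idea on a 1-D design problem with a noisy oracle; it refits a simple Gaussian search distribution to samples weighted by their estimated probability of exceeding a rising score threshold, and is not the paper's CbAS implementation (which conditions a deep generative model on the property oracle):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_oracle(x, n_draws=20):
    """Toy stochastic property predictor: true objective plus noise."""
    true_val = -(x - 2.0) ** 2                          # maximum at x = 2
    return true_val + 0.5 * rng.standard_normal((n_draws,) + np.shape(x))

# Gaussian search distribution, adapted over rounds.
mu, sigma = 0.0, 3.0
for t in range(15):
    x = mu + sigma * rng.standard_normal(200)           # propose designs
    draws = noisy_oracle(x)                             # oracle samples per design
    threshold = np.quantile(draws.mean(axis=0), 0.8)    # rising target score
    weights = (draws >= threshold).mean(axis=0)         # P(score >= threshold) per design
    if weights.sum() == 0:
        continue
    weights = weights / weights.sum()
    mu = float(np.sum(weights * x))                     # reweighted refit of the
    sigma = float(np.sqrt(np.sum(weights * (x - mu) ** 2)) + 1e-3)  # search distribution

print(f"adapted search distribution: mu={mu:.2f} sigma={sigma:.2f}")
```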
This work treats each layer of a DNN as a nonlinear system, uses Lyapunov theory to prove stability and robustness both locally and globally, and develops empirically tight bounds on the response of the output layer (or any hidden layer) to adversarial perturbations added to the input (or to the input of a hidden layer).
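The Lyapunov analysis itself is not reproduced here; the sketch below only illustrates the empirical side of the claim for a single toy ReLU layer, comparing a sampled estimate of the layer's response to perturbations against a simple Lipschitz upper bound (all weights are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer of a toy network: an affine map followed by ReLU.
W = 0.2 * rng.standard_normal((64, 32))
b = 0.1 * rng.standard_normal(64)
layer = lambda x: np.maximum(W @ x + b, 0.0)

def sampled_response(x, eps, n_samples=2000):
    """Largest observed ||layer(x+d) - layer(x)|| over random ||d||_2 = eps.

    This is a sampled (lower) estimate of the worst-case response; the cited
    work instead derives analytical, Lyapunov-based bounds.
    """
    y0 = layer(x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.standard_normal(x.shape)
        d = eps * d / np.linalg.norm(d)       # perturbation on the eps-sphere
        worst = max(worst, np.linalg.norm(layer(x + d) - y0))
    return worst

x = rng.standard_normal(32)
lipschitz = np.linalg.norm(W, 2)              # ReLU is 1-Lipschitz, so ||W||_2 bounds the gain
for eps in (0.01, 0.1, 1.0):
    print(f"eps={eps:>4}: sampled response {sampled_response(x, eps):.4f}"
          f"  <=  analytical bound {lipschitz * eps:.4f}")
```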
This work investigates the use of reinforcement learning for the robust design of low-thrust interplanetary trajectories in the presence of severe disturbances, modeled alternatively as Gaussian additive process noise, observation noise, control actuation errors on thrust magnitude and direction, and possibly multiple missed-thrust events.
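The trajectory dynamics and the RL training loop are out of scope here; the sketch below only shows how the disturbance models listed above might be attached to a generic step function, with `ToyDynamics` standing in for a real low-thrust trajectory simulator and all noise levels chosen arbitrarily:

```python
import numpy as np

class ToyDynamics:
    """Stand-in for a low-thrust trajectory simulator (purely illustrative)."""
    def step(self, state, thrust):
        return state + 0.1 * thrust

class DisturbedEnv:
    """Wrap a nominal step() with the disturbance models named in the summary."""
    def __init__(self, base_env, proc_std=1e-3, obs_std=1e-3,
                 thrust_mag_std=0.05, thrust_dir_std=0.02,
                 p_missed_thrust=0.01, seed=0):
        self.env = base_env
        self.rng = np.random.default_rng(seed)
        self.proc_std, self.obs_std = proc_std, obs_std
        self.thrust_mag_std, self.thrust_dir_std = thrust_mag_std, thrust_dir_std
        self.p_missed_thrust = p_missed_thrust

    def step(self, state, thrust):
        # Control actuation errors on thrust magnitude and direction.
        thrust = thrust * (1.0 + self.thrust_mag_std * self.rng.standard_normal())
        thrust = thrust + self.thrust_dir_std * self.rng.standard_normal(thrust.shape)
        # Missed thrust event: the engine does not fire at all this step.
        if self.rng.random() < self.p_missed_thrust:
            thrust = np.zeros_like(thrust)
        next_state = self.env.step(state, thrust)
        # Gaussian additive process noise on the true state.
        next_state = next_state + self.proc_std * self.rng.standard_normal(next_state.shape)
        # Observation noise: the agent only sees a corrupted state.
        observation = next_state + self.obs_std * self.rng.standard_normal(next_state.shape)
        return next_state, observation

env = DisturbedEnv(ToyDynamics())
state, obs = np.zeros(3), None
for _ in range(10):
    state, obs = env.step(state, np.array([1.0, 0.0, 0.0]))
print("true state:", state, "observed:", obs)
```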
A novel controller synthesis for linearized GP dynamics is presented that yields robust controllers with respect to a probabilistic stability margin; it builds on a recently proposed algorithm for linear quadratic control synthesis and extends it with probabilistic robustness guarantees in the form of credibility bounds on the system's stability.
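This is not the paper's synthesis procedure; the sketch below only illustrates the credibility-bound idea with made-up numbers: sample plausible linearized models from an assumed Gaussian posterior over the dynamics, close the loop with a fixed gain, and report the fraction of samples whose closed-loop spectral radius stays below a chosen margin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal linearized discrete-time model x_{k+1} = A x_k + B u_k and a fixed gain K.
A_mean = np.array([[1.0, 0.1],
                   [0.0, 1.0]])
A_std = 0.03                        # stand-in for GP posterior uncertainty on A
B = np.array([[0.0], [0.1]])
K = np.array([[4.0, 5.0]])          # u = -K x (gain assumed given, e.g. from LQR)

def stability_credibility(n_samples=5000, margin=0.0):
    """Fraction of sampled models whose closed-loop spectral radius is below 1 - margin."""
    stable = 0
    for _ in range(n_samples):
        A = A_mean + A_std * rng.standard_normal(A_mean.shape)   # sampled dynamics
        A_cl = A - B @ K                                          # closed loop
        if np.max(np.abs(np.linalg.eigvals(A_cl))) < 1.0 - margin:
            stable += 1
    return stable / n_samples

print("credibility of closed-loop stability:", stability_credibility())
print("credibility with 5% stability margin:", stability_credibility(margin=0.05))
```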
It is shown that the robustness of DANNs can be quantified before training using graph-structural properties such as topological entropy and Ollivier-Ricci curvature, with the greatest reliability for complex tasks and large DANNs.
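As a rough illustration of the graph quantities involved (computed here on a small built-in graph, not on the connectivity graph of a trained network), the sketch below evaluates Ollivier-Ricci edge curvature via an optimal-transport linear program and one common notion of topological entropy, the log spectral radius of the adjacency matrix:

```python
import numpy as np
import networkx as nx
from scipy.optimize import linprog

def ollivier_ricci_curvature(G, x, y, alpha=0.0):
    """Ollivier-Ricci curvature of edge (x, y): kappa = 1 - W1(mu_x, mu_y) / d(x, y).

    mu_v puts mass alpha on v itself and spreads (1 - alpha) uniformly over its
    neighbours; W1 is the Wasserstein-1 distance under the shortest-path metric.
    """
    dist = dict(nx.all_pairs_shortest_path_length(G))
    supp_x = [x] + list(G.neighbors(x))
    supp_y = [y] + list(G.neighbors(y))
    mu_x = np.array([alpha] + [(1 - alpha) / G.degree(x)] * G.degree(x))
    mu_y = np.array([alpha] + [(1 - alpha) / G.degree(y)] * G.degree(y))

    m, n = len(supp_x), len(supp_y)
    cost = np.array([[dist[a][b] for b in supp_y] for a in supp_x]).ravel()
    # Transport-plan LP: row sums equal mu_x, column sums equal mu_y.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([mu_x, mu_y])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return 1.0 - res.fun / dist[x][y]

# Small connected example graph standing in for a network's connectivity graph.
G = nx.karate_club_graph()
for u, v in list(G.edges())[:5]:
    print(f"edge ({u},{v}): kappa = {ollivier_ricci_curvature(G, u, v):.3f}")

# One common notion of a graph's topological entropy: log of the adjacency
# matrix's spectral radius (definitions vary across the literature).
A = nx.to_numpy_array(G)
print("topological entropy ~", np.log(np.max(np.abs(np.linalg.eigvals(A)))))
```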
The proposed local Latin hypercube refinement (LoLHR) strategy is model-agnostic and can be combined with any surrogate model (there is no free lunch, but possibly a budget one); it achieves on average better results than other surrogate-based strategies on the tested examples.
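A generic sketch of the local-refinement idea on a toy analytic objective follows; it is not the paper's LoLHR algorithm (which couples the refinement with a surrogate model), but it shows the two-stage pattern of a global Latin hypercube sample followed by progressively smaller local hypercubes around the incumbent best design:

```python
import numpy as np
from scipy.stats import qmc

def objective(x):
    """Toy expensive model: minimum near (0.7, 0.2) on the unit square."""
    return np.sum((x - np.array([0.7, 0.2])) ** 2, axis=-1)

dim, seed = 2, 0

# Stage 1: global Latin hypercube sample of the whole design space.
X = qmc.LatinHypercube(d=dim, seed=seed).random(40)     # points in [0, 1]^dim
y = objective(X)

# Stage 2: repeatedly place a smaller Latin hypercube around the incumbent best.
box_half_width = 0.25
for level in range(4):
    best = X[np.argmin(y)]
    lower = np.clip(best - box_half_width, 0.0, 1.0)
    upper = np.clip(best + box_half_width, 0.0, 1.0)
    local = qmc.scale(qmc.LatinHypercube(d=dim, seed=seed + level + 1).random(20),
                      lower, upper)                      # local LHS in the shrunken box
    X = np.vstack([X, local])
    y = np.concatenate([y, objective(local)])
    box_half_width *= 0.5                                # shrink the refinement region

print("best design found:", X[np.argmin(y)], "value:", y.min())
```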
This work proposes Robust Inverse Design under Noise (RID-Noise), which can use existing data to train a conditional invertible neural network and estimates the robustness of a design parameter by its predictability, measured by the prediction error of a forward neural network.
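The conditional invertible network and the actual RID-Noise objective are not shown here; the sketch below only illustrates the robustness-by-predictability idea from the summary, using a forward regressor's per-sample prediction error to down-weight designs in a deliberately noisy region (the exponential weighting is a made-up choice):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic design data: parameters X and responses y, where the response is
# much noisier (less predictable) for designs with x0 > 0.
X = rng.uniform(-1, 1, size=(600, 2))
noise = np.where(X[:, 0] > 0, 0.5, 0.02) * rng.standard_normal(600)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + noise

# Forward neural network: predict the response from the design parameters.
forward = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
forward.fit(X, y)

# Predictability of each design, taken as the forward net's prediction error,
# converted into a robustness weight (hypothetical weighting scheme).
errors = (forward.predict(X) - y) ** 2
robustness_weight = np.exp(-errors / errors.mean())

print("mean weight, unpredictable region (x0 > 0):", robustness_weight[X[:, 0] > 0].mean())
print("mean weight, predictable region  (x0 <= 0):", robustness_weight[X[:, 0] <= 0].mean())
```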