3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in self-learning.
No benchmarks are currently available.
Use these libraries to find self-learning models and implementations.
No subtasks available.
This paper presents a proof of concept for autonomous self-learning robot navigation in an unknown environment with a real robot, without a map or planner; the system trains on environments of varying difficulty and runs 32 training instances simultaneously to increase robustness.
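As a rough illustration of the parallel-training idea, the sketch below launches several independent training instances over environments of varying difficulty with Python's multiprocessing; the training function and its scoring are hypothetical stand-ins, not the paper's actual code.

```python
import multiprocessing as mp
import random

def train_instance(seed: int, difficulty: float) -> float:
    """One training instance; a real version would run an RL loop on a
    simulated environment of the given difficulty. Here we only fake a
    final score so the sketch is self-contained."""
    rng = random.Random(seed)
    return rng.random() * (1.0 - 0.3 * difficulty)

if __name__ == "__main__":
    # 32 simultaneous instances with varying seeds and difficulties.
    jobs = [(seed, (seed % 4) / 4.0) for seed in range(32)]
    with mp.Pool(processes=8) as pool:
        scores = pool.starmap(train_instance, jobs)
    print("mean final score:", sum(scores) / len(scores))
```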
Thanks to privacy-preserving domain adaptation, stakeholders such as enterprises and government organizations need not worry about privacy issues surrounding their labeled source datasets, and the proposed data-free approach can help create a positive social impact, especially for large-scale datasets.
This work presents a self-learning approach for synthesizing programs from integer sequences that relies on a tree search guided by a learned policy, and discovers solutions for 27,987 sequences starting from basic operators and without human-written training examples.
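To make the search concrete, here is a minimal, hedged sketch of a policy-guided best-first tree search over programs built from a few basic operators; the operator set and the uniform stub standing in for the learned policy are illustrative assumptions, not the paper's system.

```python
import heapq
import math

OPS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: 2 * x,
    "sqr": lambda x: x * x,
}

def run(program, n):
    """Apply the operator sequence to the input n."""
    value = n
    for op in program:
        value = OPS[op](value)
    return value

def policy(program):
    """Stand-in for the learned policy: uniform over operators."""
    return {op: 1.0 / len(OPS) for op in OPS}

def synthesize(target, max_depth=6):
    """Best-first search, expanding the most probable programs first."""
    frontier = [(0.0, ())]  # (cumulative -log prob, program)
    while frontier:
        cost, program = heapq.heappop(frontier)
        if [run(program, n) for n in range(len(target))] == target:
            return program
        if len(program) < max_depth:
            for op, p in policy(program).items():
                heapq.heappush(frontier, (cost - math.log(p), program + (op,)))
    return None

print(synthesize([1, 2, 5, 10, 17]))  # n*n + 1 -> ('sqr', 'inc')
```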
This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution.
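The self-learning loop described above can be sketched as alternating dictionary induction and orthogonal (Procrustes) re-mapping; in this toy version random vectors stand in for real embeddings, and a few seeded pairs stand in for the paper's fully unsupervised structural-similarity initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                       # "source" embeddings
W_true, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # hidden rotation
Y = X @ W_true + 0.01 * rng.normal(size=(200, 16))   # "target" embeddings

# Initialization: Procrustes fit on 20 seed pairs (a stand-in for the
# paper's unsupervised structural-similarity initialization).
U, _, Vt = np.linalg.svd(X[:20].T @ Y[:20])
W = U @ Vt

for _ in range(10):
    # Dictionary induction: nearest target vector for each mapped source.
    matches = ((X @ W) @ Y.T).argmax(axis=1)
    # Re-fit the best orthogonal map for the induced dictionary.
    U, _, Vt = np.linalg.svd(X.T @ Y[matches])
    W = U @ Vt

print("alignment error:", np.linalg.norm(X @ W - Y))
```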
This notebook paper presents an overview and comparative analysis of the systems designed for the following two tasks in Visual Domain Adaptation Challenge (VisDA-2019): multi-source domain adaptation and semi-supervised domain adaptation.
A novel generative model for face images is proposed that can produce high-quality images under fine-grained control over eye gaze and head orientation angles, and that learns to discover, disentangle, and encode these extraneous variations in a self-learned manner.
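As a loose illustration of such explicit conditioning (PyTorch assumed), the toy generator below takes gaze and head-orientation angles as separate inputs alongside the latent code; the architecture is an arbitrary stand-in, not the paper's model.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=64, img_size=32):
        super().__init__()
        # 4 conditioning values: gaze yaw/pitch and head yaw/pitch.
        self.net = nn.Sequential(
            nn.Linear(z_dim + 4, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Tanh())
        self.img_size = img_size

    def forward(self, z, gaze, head):
        x = torch.cat([z, gaze, head], dim=-1)
        img = self.net(x)
        return img.view(-1, 3, self.img_size, self.img_size)

g = ConditionalGenerator()
img = g(torch.randn(1, 64), torch.zeros(1, 2), torch.zeros(1, 2))
print(img.shape)  # torch.Size([1, 3, 32, 32])
```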
A pre-training framework named "knowledge inheritance" (KI) is introduced, and the paper explores how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs, demonstrating the superiority of KI in training efficiency.
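A common way to realize distillation as auxiliary supervision is to mix the language-modeling loss with a KL term toward an already trained (here, smaller) teacher; the sketch below assumes PyTorch and illustrative model interfaces, and is not the KI authors' code.

```python
import torch
import torch.nn.functional as F

def ki_step(student, teacher, input_ids, labels, alpha=0.5, T=2.0):
    """LM loss plus a distillation loss from a trained teacher."""
    student_logits = student(input_ids)              # (batch, seq, vocab)
    with torch.no_grad():
        teacher_logits = teacher(input_ids)
    lm_loss = F.cross_entropy(student_logits.flatten(0, 1),
                              labels.flatten())
    kd_loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       F.log_softmax(teacher_logits / T, dim=-1),
                       log_target=True, reduction="batchmean") * T * T
    return (1 - alpha) * lm_loss + alpha * kd_loss
```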
It is found that transfer learning always substantially improves the model's accuracy when few labeled examples are available, regardless of the type of loss used for training the neural network.
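For concreteness, a typical transfer-learning setup of this kind (torchvision assumed; the paper's backbone and loss may differ) replaces the classification head of a pretrained network and fine-tunes only what the few labels can support:

```python
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes: int, freeze_backbone: bool = True):
    """Pretrained backbone plus a fresh head for a small labeled set."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        # With very few labels, training only the head is often safer.
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_transfer_model(num_classes=10)
```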
This paper proposes to overcome the diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation, and shows that this can produce gains rivaling those of human-annotated data on QALD-9 and achieve a new state of the art for BioAMR.
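The Smatch-based graph ensembling is specific to AMR parsing, so the hedged sketch below shows only the generic ensemble-distillation step it is combined with: the averaged ensemble distribution becomes the soft target for a single student model (PyTorch assumed).

```python
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, member_logits_list):
    """KL divergence from the averaged ensemble distribution (soft
    target) to the student's predicted distribution."""
    with torch.no_grad():
        target = torch.stack(
            [F.softmax(l, dim=-1) for l in member_logits_list]).mean(dim=0)
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    target, reduction="batchmean")

# Toy usage: three "ensemble members" and one student over 5 classes.
members = [torch.randn(4, 5) for _ in range(3)]
student = torch.randn(4, 5, requires_grad=True)
print(ensemble_distill_loss(student, members))
```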
Simulation results demonstrate that the proposed self-improving artificial intelligence system efficiently discovers safety failures of action decisions in RL-based adaptive cruise control (ACC) applications and significantly reduces the number of vehicle collisions through iterative applications of the method.
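The iterative failure-discovery loop can be caricatured as: search simulated scenarios for collisions under the current policy, then update the policy on the failures found. Everything in this toy (the 1-D "simulator", the threshold "policy", the update rule) is a hypothetical stand-in for the RL-based ACC system.

```python
import random

rng = random.Random(0)

def simulate(gap, decel, threshold):
    """Lead car brakes; collision if our reaction threshold is too slow
    for the gap (a cartoon of a car-following scenario)."""
    return gap - decel * threshold < 0  # True = collision

def find_failures(threshold, trials=1000):
    """Scenario search: sample conditions, keep those that crash."""
    return [(g, d) for g, d in
            ((rng.uniform(5, 50), rng.uniform(1, 9)) for _ in range(trials))
            if simulate(g, d, threshold)]

threshold = 10.0  # initial "policy" parameter
for it in range(5):
    failures = find_failures(threshold)
    print(f"iteration {it}: {len(failures)} collisions")
    if not failures:
        break
    # "Retrain": tighten the policy past the worst discovered failure.
    threshold = min(threshold, 0.9 * min(g / d for g, d in failures))
```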