Auxiliary learning aims to find or design auxiliary tasks that improve performance on one or more primary tasks. (Image credit: Self-Supervised Generalisation with Meta Auxiliary Learning)
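To make the setup concrete, here is a minimal sketch of the most common instantiation, hard parameter sharing: a shared encoder feeds a primary head and an auxiliary head, and a weighted sum of the two losses is minimized. All layer sizes, the auxiliary weight `aux_weight`, and the synthetic data below are illustrative assumptions, not taken from any particular paper.

```python
# Minimal hard-parameter-sharing auxiliary learning sketch (all sizes,
# weights, and data here are illustrative assumptions).
import torch
import torch.nn as nn

class AuxiliaryNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_primary=10, n_aux=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.primary_head = nn.Linear(hidden, n_primary)  # primary task
        self.aux_head = nn.Linear(hidden, n_aux)          # auxiliary task

    def forward(self, x):
        z = self.encoder(x)  # shared representation shaped by both tasks
        return self.primary_head(z), self.aux_head(z)

model = AuxiliaryNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
aux_weight = 0.3  # how strongly the auxiliary gradient shapes the encoder

x = torch.randn(16, 32)
y_primary = torch.randint(0, 10, (16,))
y_aux = torch.randint(0, 5, (16,))

p_logits, a_logits = model(x)
loss = criterion(p_logits, y_primary) + aux_weight * criterion(a_logits, y_aux)
opt.zero_grad()
loss.backward()
opt.step()
```

Only the primary head's performance is reported at test time; the auxiliary head exists purely to regularize the shared encoder during training.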
These leaderboards are used to track progress in Auxiliary Learning.
No benchmarks available.
Use these libraries to find Auxiliary Learning models and implementations.
No datasets available.
No subtasks available.
The proposed method, Meta AuXiliary Learning (MAXL), outperforms single-task learning on 7 image datasets, without requiring any additional data, and is even competitive when compared with human-defined auxiliary labels.
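MAXL's core idea is a bilevel loop: a label-generation network produces soft auxiliary labels, the multi-task network takes a virtual gradient step on the joint loss, and the generator is then updated by the primary loss the virtual model achieves. The sketch below is a heavily simplified, hedged rendition of that loop (linear models, a single inner step, and none of the masked softmax or entropy regularization the actual method uses); `torch.func.functional_call` performs the functional forward for the virtual step.

```python
# Simplified MAXL-style meta auxiliary learning loop (a sketch, not the
# paper's implementation; sizes and learning rates are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

mtl = nn.Linear(32, 15)            # joint head: 10 primary + 5 aux logits
label_gen = nn.Linear(32, 5)       # generates soft auxiliary labels
opt_mtl = torch.optim.SGD(mtl.parameters(), lr=0.1)
opt_gen = torch.optim.Adam(label_gen.parameters(), lr=1e-3)

x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))

def losses(params, x, y, soft_aux):
    out = functional_call(mtl, params, (x,))
    p_logits, a_logits = out[:, :10], out[:, 10:]
    primary = F.cross_entropy(p_logits, y)
    aux = -(soft_aux * F.log_softmax(a_logits, dim=-1)).sum(-1).mean()
    return primary, aux

params = dict(mtl.named_parameters())
soft_aux = F.softmax(label_gen(x), dim=-1)  # generated auxiliary labels

# Inner (virtual) step: one SGD update of the multi-task net on both losses.
primary, aux = losses(params, x, y, soft_aux)
grads = torch.autograd.grad(primary + aux, list(params.values()),
                            create_graph=True)
virtual = {k: p - 0.1 * g for (k, p), g in zip(params.items(), grads)}

# Outer (meta) step: the label generator is scored by the primary loss the
# virtual model achieves, so its gradient flows through the inner update.
meta_primary, _ = losses(virtual, x, y, soft_aux)
opt_gen.zero_grad()
meta_primary.backward()
opt_gen.step()

# Ordinary update of the multi-task network with the labels detached.
primary, aux = losses(dict(mtl.named_parameters()), x, y, soft_aux.detach())
opt_mtl.zero_grad()
(primary + aux).backward()
opt_mtl.step()
```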
This work presents a convolutional neural network for estimating pixelwise object placement probabilities for a set of spatial relations from a single input image, and demonstrates the effectiveness of the method in reasoning about the best way to place objects to reproduce a spatial relation.
This work presents an approach for automatically generating a suite of auxiliary objectives by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure. It also theoretically formalizes widely held intuitions about how auxiliary learning improves generalization on the end task.
This work proposes VLocNet, a new convolutional neural network architecture for 6-DoF global pose regression and odometry estimation from consecutive monocular images, and proposes a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates.
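The geometric-consistency idea can be written down compactly: the total loss is the global pose loss plus a weighted auxiliary relative-pose loss between consecutive frames. The sketch below is a hedged approximation; `beta` (translation vs. rotation trade-off) and `alpha` (auxiliary weight) are illustrative constants, whereas VLocNet itself uses learnable loss weightings not reproduced here.

```python
# Sketch of a global-pose loss with an auxiliary relative-pose term
# (illustrative constants; not the paper's exact loss).
import torch

def pose_loss(pred_t, pred_q, gt_t, gt_q, beta=10.0):
    # Translation error plus weighted rotation (quaternion) error.
    return (torch.norm(pred_t - gt_t, dim=-1).mean()
            + beta * torch.norm(pred_q - gt_q, dim=-1).mean())

def global_plus_relative_loss(glob_pred, glob_gt, rel_pred, rel_gt, alpha=0.5):
    # The auxiliary relative-pose term constrains consecutive global
    # estimates to agree with the measured inter-frame motion.
    return pose_loss(*glob_pred, *glob_gt) + alpha * pose_loss(*rel_pred, *rel_gt)

t = lambda: torch.randn(8, 3)  # toy translations
q = lambda: torch.randn(8, 4)  # toy quaternions
loss = global_plus_relative_loss((t(), q()), (t(), q()), (t(), q()), (t(), q()))
```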
In an experiment on a large-scale hyperparameter optimization task for 120 UCI datasets with varying schemas as a meta-learning task, it is shown that the meta-features of Dataset2Vec outperform the expert engineered meta- Features and thus demonstrate the usefulness of learned meta- features for datasets with varies schemas for the first time.
A novel framework, AuxiLearn, based on implicit differentiation, is proposed to address two challenges: designing useful auxiliary tasks and combining auxiliary tasks into a single coherent loss.
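A stripped-down view of the "single coherent loss" part: a small parametric combiner maps the vector of per-task losses to one scalar training objective. In the sketch below the combiner is just softplus-constrained linear weights, an assumption made for brevity; AuxiLearn itself also allows nonlinear combinations and trains the combiner on a separate auxiliary set via implicit differentiation, which is not reproduced here.

```python
# Sketch of a learnable loss combiner (simplified stand-in for AuxiLearn's
# combination network; the bilevel training step is omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LossCombiner(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.raw_w = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):     # task_losses: (n_tasks,)
        w = F.softplus(self.raw_w)      # nonnegative auxiliary weights
        # Primary loss (index 0) enters unweighted; auxiliaries are weighted.
        return task_losses[0] + (w[1:] * task_losses[1:]).sum()

combiner = LossCombiner(n_tasks=3)
# In practice these would be the computed primary and auxiliary losses.
losses = torch.stack([torch.tensor(0.9), torch.tensor(1.2), torch.tensor(0.4)])
total = combiner(losses)                # scalar objective to backpropagate
```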
This paper proposes a novel self-supervised auxiliary learning method using meta-paths (composite relations of multiple edge types), which can be viewed as a type of meta-learning for training graph neural networks on heterogeneous graphs.
This paper first designs meta-path prediction as a self-supervised auxiliary task for heterogeneous graphs, then identifies an effective combination of auxiliary tasks and automatically balances them to improve the primary task.
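As a rough illustration of meta-path prediction as an auxiliary task: given node embeddings from any GNN encoder, a pairwise head predicts whether each meta-path type (e.g. author-paper-author) connects two nodes, and the resulting self-supervised loss is added to the primary objective. Everything below (the bilinear scorer, random embeddings, and labels) is a stand-in sketch, not the paper's architecture.

```python
# Sketch of a meta-path prediction auxiliary head over GNN embeddings
# (stand-in components; sampling and the GNN encoder are stubbed out).
import torch
import torch.nn as nn

class MetaPathHead(nn.Module):
    def __init__(self, dim, n_metapaths):
        super().__init__()
        self.scorer = nn.Bilinear(dim, dim, n_metapaths)

    def forward(self, z_src, z_dst):
        # One logit per meta-path type for each node pair.
        return self.scorer(z_src, z_dst)

dim, n_mp = 64, 3
head = MetaPathHead(dim, n_mp)
z = torch.randn(100, dim)                   # stand-in GNN node embeddings
src = torch.randint(0, 100, (32,))
dst = torch.randint(0, 100, (32,))
labels = torch.randint(0, 2, (32, n_mp)).float()  # meta-path connectivity

aux_loss = nn.BCEWithLogitsLoss()(head(z[src], z[dst]), labels)
# total = primary_loss + aux_weight * aux_loss    # joint objective
```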
This work proposes AuxSegNet, a novel weakly supervised multi-task framework that leverages saliency detection and multi-label image classification as auxiliary tasks to improve the primary task of semantic segmentation using only image-level ground-truth labels.
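Structurally this is again a shared backbone with one primary and two auxiliary heads, and the sketch below shows only that joint loss. The pseudo-label targets are stand-ins: the actual AuxSegNet derives segmentation pseudo-labels from image-level labels via CAMs and a learned cross-task affinity, which is omitted here.

```python
# Sketch of an AuxSegNet-style joint loss: segmentation (primary) plus
# saliency and classification (auxiliary) heads on a shared backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Conv2d(3, 16, 3, padding=1)  # stand-in feature extractor
seg_head = nn.Conv2d(16, 21, 1)            # primary: semantic segmentation
sal_head = nn.Conv2d(16, 1, 1)             # auxiliary: saliency detection
cls_head = nn.Linear(16, 20)               # auxiliary: multi-label classification

x = torch.randn(2, 3, 64, 64)
feat = F.relu(backbone(x))
seg_pseudo = torch.randint(0, 21, (2, 64, 64))  # stand-in pseudo-labels
sal_gt = torch.rand(2, 1, 64, 64)               # stand-in saliency maps
cls_gt = torch.randint(0, 2, (2, 20)).float()   # image-level ground truth

loss = (F.cross_entropy(seg_head(feat), seg_pseudo)
        + 0.5 * F.binary_cross_entropy_with_logits(sal_head(feat), sal_gt)
        + 0.5 * F.binary_cross_entropy_with_logits(
              cls_head(feat.mean(dim=(2, 3))), cls_gt))
```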
It is proposed that agents will act to simplify their visual inputs so as to smooth their RNN dynamics, and that auxiliary tasks reduce overfitting by minimizing effective RNN dimensionality; i.e. a performant ObjectNav agent that must maintain coherent plans over long horizons does so by learning smooth, low-dimensional recurrent dynamics.
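"Effective RNN dimensionality" can be made quantitative with, for example, the participation ratio of the hidden-state covariance spectrum, PR = (Σᵢλᵢ)² / Σᵢλᵢ². Note that this particular estimator is an assumption for illustration and not necessarily the one used in the paper.

```python
# Participation ratio of RNN hidden states as one standard estimate of
# effective dimensionality (illustrative; may differ from the paper's).
import torch

def participation_ratio(hidden_states):
    # hidden_states: (T, d) matrix of RNN states along a trajectory.
    h = hidden_states - hidden_states.mean(0, keepdim=True)
    cov = (h.T @ h) / (h.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov).clamp(min=0)
    return eig.sum() ** 2 / (eig ** 2).sum()

states = torch.randn(500, 128) @ torch.randn(128, 128)  # toy trajectory
print(float(participation_ratio(states)))  # at most 128; lower means
                                           # lower-dimensional dynamics
```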