The idea of Multi-target Domain Adaptation is to adapt a model from a single labelled source domain to multiple unlabelled target domains.
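This setup is often trained with a supervised loss on the labelled source plus an unsupervised term per unlabelled target domain. A minimal NumPy sketch of such a combined objective, using entropy minimization as a generic stand-in for the target-side term (the function name `mtda_objective` and the weight `lam` are illustrative, not from any specific paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mtda_objective(src_logits, src_labels, tgt_logits_per_domain, lam=0.1):
    """Supervised cross-entropy on the source plus an unsupervised
    entropy term averaged over the unlabelled target domains."""
    p_src = softmax(src_logits)
    ce = -np.log(p_src[np.arange(len(src_labels)), src_labels] + 1e-12).mean()
    ent = 0.0
    for tgt_logits in tgt_logits_per_domain:
        p = softmax(tgt_logits)
        ent += -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return ce + lam * ent / len(tgt_logits_per_domain)
```

In practice the unsupervised term varies by method (adversarial alignment, confusion minimization, distillation); the shape of the objective — one labelled source term, one term per target domain — is the common thread.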
The method performs well in a series of image classification experiments, achieving effective adaptation under large domain shifts and outperforming the previous state of the art on the Office datasets.
Reducing such pairwise class confusion is shown to yield significant transfer gains, motivating a general loss function, Minimum Class Confusion (MCC): a non-adversarial DA method that requires no explicit domain alignment and enjoys faster convergence.
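The core of MCC is a class-confusion matrix built from the classifier's own target predictions, whose off-diagonal mass is minimized. A simplified NumPy sketch (omitting the uncertainty reweighting used in the full method; the temperature value `T=2.5` is illustrative):

```python
import numpy as np

def softmax(z, T=2.5):
    z = z / T  # temperature-scaled probabilities
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mcc_loss(target_logits, T=2.5):
    """Simplified Minimum Class Confusion: penalize the off-diagonal
    (between-class) mass of the pairwise class-confusion matrix."""
    p = softmax(target_logits, T)                   # (B, C) predictions
    conf = p.T @ p                                  # (C, C) class confusion
    conf = conf / conf.sum(axis=1, keepdims=True)   # row-normalize
    num_classes = target_logits.shape[1]
    return (conf.sum() - np.trace(conf)) / num_classes
```

Confident, well-separated predictions yield a near-diagonal confusion matrix and hence a small loss; near-uniform predictions yield a large one.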
This paper devises a novel Deep Adversarial Disentangled Autoencoder (DADA) that disentangles domain-specific features from class identity, and demonstrates experimentally that, when target domain labels are unknown, DADA achieves state-of-the-art performance on several image classification datasets.
This paper proposes a novel unsupervised MTDA approach to train a CNN that can generalize well across multiple target domains and relies on multi-teacher knowledge distillation (KD) to iteratively distill target domain knowledge from multiple teachers to a common student.
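Multi-teacher distillation of this kind typically trains the student against an aggregate of the teachers' softened predictions. A minimal NumPy sketch, assuming simple uniform averaging of teacher outputs (the paper's iterative scheme is more elaborate; the function name and temperature are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=2.0):
    """Cross-entropy between the averaged softened teacher predictions
    and the student's softened predictions."""
    teacher_p = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    student_logp = np.log(softmax(student_logits, T) + 1e-12)
    return -(teacher_p * student_logp).sum(axis=-1).mean()
```

Each teacher here would be a model adapted to one target domain, so the averaged distribution pools knowledge from all targets into a single student.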
Empirical results show that BTDA is a challenging transfer setup for most existing DA algorithms, yet AMEAN significantly outperforms these state-of-the-art baselines and effectively restrains negative transfer effects in BTDA.
This paper develops a co-teaching strategy with the dual classifier head that is assisted by curriculum learning to obtain more reliable pseudo-labels and proposes Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
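The "easier domains first" ordering needs a difficulty score per target domain; a common proxy is the mean entropy of the model's predictions on that domain. A minimal NumPy sketch of such an ordering (the function names and the entropy-based score are an assumed proxy, not the paper's exact criterion):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(logits):
    """Average prediction entropy over one target domain's batch."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def curriculum_order(target_domain_logits):
    """Indices of target domains sorted easiest (lowest mean entropy)
    to hardest, giving the adaptation order."""
    scores = [mean_entropy(d) for d in target_domain_logits]
    return sorted(range(len(scores)), key=lambda i: scores[i])
```

The adaptation loop then visits domains in this order, refreshing pseudo-labels after each stage.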
This work proposes a new Incremental MTDA technique for object detection that can adapt a detector to multiple target domains, one at a time, without having to retain data of previously-learned target domains.
This work proposes foreground-aware image stylization and consensus pseudo-labeling for domain adaptation of hand segmentation and demonstrates promising results in challenging multi-target domain adaptation and domain generalization settings.
This work proposes a novel unsupervised multi-target domain adaptation framework, SEE, for transferring the performance of state-of-the-art 3D detectors across both fixed and flexible scan pattern lidars without requiring fine-tuning of models by end-users.