Heterogeneous face recognition (HFR) is the task of matching face images acquired from different sources (e.g., different sensors or different wavelengths) for identification or verification. (Image credit: Pose Agnostic Cross-spectral Hallucination via Disentangling Independent Factors)
This work proposes a surprisingly simple yet very effective method for matching face images across different sensing modalities: a novel neural network block, the Prepended Domain Transformer, is added in front of a pre-trained face recognition (FR) model to address the domain gap.
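The idea above can be sketched in a few lines: keep a pre-trained FR embedder frozen and train only a small transformation that is prepended to the non-visible-spectrum input. The following numpy toy (all names and shapes are illustrative, not the paper's architecture) shows how a transformed NIR image and a visible image are compared with the same frozen embedder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen face-recognition embedder: its weights stay fixed;
# only the prepended transformer below would be trained in practice.
W_fr = rng.normal(size=(64, 128))           # frozen FR projection (toy)

def fr_embed(x):
    """Pre-trained FR model (frozen): maps a 64-d face vector to a unit-norm
    128-d embedding suitable for cosine matching."""
    e = x @ W_fr
    return e / np.linalg.norm(e)

# Prepended Domain Transformer (sketch): a small learnable map applied to the
# NIR input *before* the frozen FR model, so only these parameters need
# training to close the domain gap. Initialized near the identity here.
W_pdt = np.eye(64) + 0.01 * rng.normal(size=(64, 64))

def match_score(vis_img, nir_img):
    """Cosine similarity between a visible probe and a PDT-transformed NIR image."""
    e_vis = fr_embed(vis_img)
    e_nir = fr_embed(nir_img @ W_pdt)       # NIR passes through the PDT first
    return float(e_vis @ e_nir)

vis = rng.normal(size=64)
nir = vis + 0.1 * rng.normal(size=64)       # toy NIR counterpart, same identity
print(match_score(vis, nir))
```

Because the FR backbone is frozen, the only trainable parameters are those of the prepended block, which is what makes the approach simple to deploy on top of an existing model.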
This paper considers HFR as a dual generation problem and proposes a novel Dual Variational Generation (DVG) framework that generates large-scale paired heterogeneous images of the same identity from noise, in order to reduce the domain gap in HFR.
Recent advancements in deep learning have significantly increased the capabilities of face recognition. However, face recognition in an unconstrained environment is still an active research challenge. Covariates such as pose and low resolution have received significant attention, but "disguise" is considered an onerous covariate of face recognition. One primary reason for this is the unavailability of large and representative databases. To address the problem of recognizing disguised faces, we propose an active learning framework, A-LINK, that intelligently selects training samples from the target domain data such that the decision boundary does not overfit to a particular set of variations and better generalizes to encode variability. The framework further applies domain adaptation with the actively selected training samples to fine-tune the network. We demonstrate the effectiveness of the proposed framework on the DFW and Multi-PIE datasets with state-of-the-art models such as LCSSE and DenseNet.
Face recognition in the unconstrained environment is an ongoing research challenge. Although several covariates of face recognition such as pose and low resolution have received significant attention, "disguise" is considered an onerous covariate of face recognition. One of the primary reasons for this is the scarcity of large and representative labeled databases, along with the lack of algorithms that work well for multiple covariates in such environments. To address the problem of face recognition in the presence of disguise, the paper proposes an active learning framework termed A2-LINK. Starting with a face recognition machine-learning model, A2-LINK intelligently selects training samples from the target domain to be labeled and, using hybrid noises such as adversarial noise, fine-tunes a model that works well both in the presence and absence of disguise. Experimental results demonstrate the effectiveness and generalization of the proposed framework on the DFW and DFW2019 datasets with state-of-the-art deep learning featurization models such as LCSSE, ArcFace, and DenseNet.
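The core of the active learning step in both frameworks is choosing which unlabeled target-domain samples are most worth labeling. A common selection criterion (shown here as a generic uncertainty-sampling sketch, not the exact A-LINK/A2-LINK criterion) ranks samples by how close the model's prediction is to the decision boundary:

```python
import numpy as np

def select_uncertain(probs, k):
    """Uncertainty sampling (generic sketch): rank unlabeled samples by
    1 - max class probability and return the indices of the k most
    uncertain ones, i.e. those nearest the decision boundary."""
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(uncertainty)[::-1][:k]

# Toy predicted class probabilities for four unlabeled target-domain samples.
probs = np.array([[0.95, 0.05],   # confident -> skip
                  [0.55, 0.45],   # near the boundary -> worth labeling
                  [0.60, 0.40],
                  [0.99, 0.01]])
print(select_uncertain(probs, 2))
```

The selected samples are then labeled and used to fine-tune the network, which is where the domain adaptation described above comes in.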
This paper formulates HFR as a dual generation problem and tackles it via a novel Dual Variational Generation (DVG-Face) framework, achieving superior performance over state-of-the-art methods on seven challenging databases spanning five HFR tasks: NIR-VIS, Sketch-Photo, Profile-Frontal Photo, Thermal-VIS, and ID-Camera.
This work introduces a novel Conditional Adaptive Instance Modulation (CAIM) module that can be integrated into pre-trained FR networks to transform them into HFR networks, and proposes a framework for adapting intermediate feature maps to bridge the domain gap.
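The feature-map adaptation in this line of work builds on instance-level statistics: an intermediate feature map is normalized per channel and then re-modulated with learned scale and shift parameters, while the surrounding FR network stays frozen. A minimal numpy sketch of such an adaptive instance modulation step (function and parameter names are hypothetical, not the paper's API):

```python
import numpy as np

def instance_stats(feat):
    """Per-channel mean and std over the spatial dimensions of one
    (C, H, W) feature map, as in instance normalization."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-5
    return mu, sigma

def adaptive_instance_modulation(feat, gamma, beta):
    """Sketch of a CAIM-style step: normalize the feature map with its own
    instance statistics, then re-scale/shift with parameters (gamma, beta)
    learned for the source modality. Only these parameters are trained;
    the pre-trained FR backbone around this module stays frozen."""
    mu, sigma = instance_stats(feat)
    return gamma[:, None, None] * (feat - mu) / sigma + beta[:, None, None]

rng = np.random.default_rng(1)
feat = rng.normal(loc=3.0, scale=2.0, size=(8, 4, 4))   # toy NIR feature map
gamma = np.ones(8)                                       # learned scale (toy)
beta = np.zeros(8)                                       # learned shift (toy)
out = adaptive_instance_modulation(feat, gamma, beta)
```

With identity modulation parameters, the output is simply the instance-normalized feature map; during training, gamma and beta would shift the source-modality statistics toward those the frozen FR network expects.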