Gaze Target Estimation refers to predicting the 2D image location that a person in the scene is looking at.
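In practice, most methods for this task output a dense heatmap over the scene and take its peak as the predicted gaze point. The sketch below shows only that final heatmap-to-point step; the 64x64 resolution and the normalization choice are illustrative assumptions, not tied to any particular benchmark:

```python
import numpy as np

def heatmap_to_gaze_point(heatmap: np.ndarray) -> tuple[float, float]:
    """Convert a predicted gaze heatmap of shape (H, W) to a normalized (x, y) point."""
    h, w = heatmap.shape
    # The highest-scoring cell is taken as the gaze target.
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Normalize to [0, 1] so the point is resolution-independent.
    return col / w, row / h

# Example: a dummy 64x64 heatmap with a single peak.
hm = np.zeros((64, 64))
hm[30, 34] = 1.0
print(heatmap_to_gaze_point(hm))  # -> (0.53125, 0.46875)
```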
This paper addresses the gaze target detection problem in single images captured from a third-person perspective. We present a multimodal deep architecture to infer where a person in a scene is looking. The spatial model is trained on head images of the person-of-interest, the scene, and depth maps that provide rich contextual information. Unlike much prior art, our model does not require supervision of gaze angles and does not rely on head orientation information or the location of the person-of-interest's eyes. Extensive experiments demonstrate the stronger performance of our method on multiple benchmark datasets. We also investigate several variations of our method obtained by altering the joint learning of the multimodal data; some of these variations also outperform prior art. For the first time, we examine domain adaptation for gaze target detection and equip our multimodal network to effectively handle the domain gap across datasets. The code of the proposed method is available at https://github.com/francescotonini/multimodal-across-domains-gaze-target-detection.
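The authors' actual implementation is linked above. Purely as an illustration of the three-branch idea described in the abstract (head crop, scene image, and depth map encoded separately, then fused into a gaze heatmap), here is a minimal PyTorch sketch; every layer size, name, and the concatenation-based fusion scheme are our own assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class MultimodalGazeNet(nn.Module):
    """Illustrative three-branch model: head crop, scene image, and depth map
    are encoded separately, fused, and decoded into a gaze heatmap.
    All layer sizes are arbitrary; see the linked repository for the real model."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        def encoder(in_ch: int) -> nn.Sequential:
            # Tiny conv encoder, downsampling by 4x.
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.head_enc = encoder(3)   # cropped head image (RGB)
        self.scene_enc = encoder(3)  # full scene image (RGB)
        self.depth_enc = encoder(1)  # monocular depth map
        self.fuse = nn.Conv2d(3 * feat_dim, feat_dim, 1)  # 1x1 fusion of the three branches
        self.decoder = nn.Sequential(  # upsample fused features back to input resolution
            nn.ConvTranspose2d(feat_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, head, scene, depth):
        f = torch.cat(
            [self.head_enc(head), self.scene_enc(scene), self.depth_enc(depth)], dim=1
        )
        return self.decoder(self.fuse(f))  # (B, 1, H, W) gaze heatmap logits

# Smoke test with dummy 224x224 inputs.
model = MultimodalGazeNet()
out = model(torch.randn(1, 3, 224, 224),
            torch.randn(1, 3, 224, 224),
            torch.randn(1, 1, 224, 224))
print(out.shape)  # torch.Size([1, 1, 224, 224])
```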
This study proposes the Gaze-grounded VQA dataset (GazeVQA), built around a clarification process in which ambiguous questions are disambiguated with gaze information, together with a method that exploits gaze target estimation results to improve accuracy on GazeVQA tasks.
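The abstract does not spell out how gaze estimates feed into answering. As a loose, hypothetical illustration of grounding an ambiguous question (e.g. "what is that?") with a gaze point, one could pick the detected object nearest the estimated gaze target; the nearest-center rule and all names below are our assumptions, not the paper's method:

```python
import numpy as np

def resolve_referent(gaze_xy, object_boxes):
    """Pick the detected object whose box center is closest to the estimated
    gaze point; a hypothetical stand-in for the paper's actual method.

    gaze_xy: (x, y) gaze target in pixels
    object_boxes: dict mapping label -> (x1, y1, x2, y2)
    """
    gaze = np.asarray(gaze_xy, dtype=float)
    def center(box):
        x1, y1, x2, y2 = box
        return np.array([(x1 + x2) / 2, (y1 + y2) / 2])
    return min(object_boxes,
               key=lambda lbl: np.linalg.norm(center(object_boxes[lbl]) - gaze))

# "What is that?" with gaze estimated at (300, 180) resolves to the cup.
boxes = {"cup": (280, 150, 330, 210), "laptop": (60, 100, 200, 220)}
print(resolve_referent((300, 180), boxes))  # -> cup
```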