An approach based on a two-stage convolutional neural network, one stage for detecting the hand and another for the fingertips, which outperforms existing systems in detection precision and in interaction performance in a VR environment.
This paper presents a two-stage convolutional neural network (CNN) approach for fingertip detection, enabling interaction between the fingertips and a 3D object in a virtual reality (VR) environment. The first-stage CNN detects and localizes the hand. The detected hand region is then cropped, resized, and fed to the second-stage CNN, which predicts the fingertip coordinates. A tracker follows the hand across frames so that the system remains reliable in real time. VR environments are designed to demonstrate the fingertip-based interaction system. The proposed method focuses on geometric transformation of a virtual 3D object using a thumb-and-index-finger gesture; in particular, the distance between the thumb and index fingertips is used to scale a 3D object in the virtual environment. To realize the system, a dataset of 1000 images, named Thumb Index 1000 (TI1K), is developed, covering variations of the thumb and index finger commonly seen in real life. The system is evaluated with the aid of a number of participants and a set of distinct virtual objects. The proposed approach attains the desired goal and runs seamlessly in real time, facilitating human-computer interaction in the VR environment.
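The pipeline described above (detect hand, crop and resize, regress fingertip coordinates, map them back to the image, then scale a virtual object by the thumb-index distance) can be sketched as follows. This is a minimal illustration with stub detectors: `detect_hand`, `predict_fingertips`, the patch size, and the `baseline` distance are all hypothetical stand-ins, not the paper's trained models or parameters.

```python
import numpy as np

def detect_hand(image):
    # Stage 1 (stub): a real system would run a hand-detection CNN.
    # Here we just return a fixed bounding box (x, y, w, h).
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def crop_and_resize(image, box, size=(128, 128)):
    # Crop the detected hand and resize it for the second-stage CNN.
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]
    # Nearest-neighbour resize via index sampling (stand-in for cv2.resize).
    ys = np.linspace(0, crop.shape[0] - 1, size[1]).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, size[0]).astype(int)
    return crop[np.ix_(ys, xs)]

def predict_fingertips(patch):
    # Stage 2 (stub): a real system would regress these with a CNN.
    # Returns normalized (x, y) for the thumb and index fingertips.
    return np.array([[0.3, 0.6], [0.7, 0.4]])

def fingertip_pipeline(image):
    box = detect_hand(image)
    patch = crop_and_resize(image, box)
    tips = predict_fingertips(patch)
    x, y, w, h = box
    # Map normalized patch coordinates back to full-image coordinates.
    return tips * np.array([w, h]) + np.array([x, y])

def scale_factor(thumb, index, baseline=100.0):
    # Thumb-index fingertip distance drives uniform scaling of the
    # virtual 3D object (baseline is an assumed reference distance).
    return float(np.linalg.norm(index - thumb)) / baseline

thumb, index = fingertip_pipeline(np.zeros((480, 640), dtype=np.uint8))
print(scale_factor(thumb, index))
```

Pinching the fingers together shrinks the object and spreading them enlarges it, which is why a single scalar (the fingertip distance) suffices as the control signal.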
Experimental results show that the proposed method outperforms existing fingertip-detection approaches, including the direct-regression and heatmap-based frameworks.
This paper presents a novel approach to the digital signing of electronic documents through a camera-based interaction system, single-finger tracking for sign recognition, and hand gestures that execute multiple commands. The proposed solution, referred to as "Air Signature," involves writing the signature in front of the camera rather than relying on traditional methods such as drawing with a mouse or physically signing on paper and showing it to a web camera. The goal is to develop a state-of-the-art method for detecting and tracking gestures and objects in real time. The proposed methods include applying existing gesture-recognition and object-tracking systems, improving accuracy through smoothing and line drawing, and maintaining continuity during fast finger movements. An evaluation of the fingertip detection, sketching, and overall signing process is performed to assess the effectiveness of the proposed solution. The secondary objective of this research is to develop a model that can effectively recognize the unique signature of a user. Such a signature can be verified by neural cores that analyze the movement, speed, and stroke pixels of the signing in real time. The neural cores use machine-learning algorithms to match air signatures against the individual's stored signatures, providing a secure and efficient method of verification. The proposed system requires no sensors or hardware other than the camera.
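The smoothing and continuity steps mentioned above can be sketched with two generic techniques: an exponential moving average to damp detector jitter, and linear interpolation between consecutive fingertip samples so that fast motion still produces an unbroken stroke. Both are common stand-ins, not the paper's specific algorithms; the `alpha` and `step` parameters are assumed values.

```python
import numpy as np

def smooth_trajectory(points, alpha=0.5):
    # Exponential moving average over fingertip positions: damps the
    # frame-to-frame jitter of the detector before drawing.
    smoothed = [np.asarray(points[0], dtype=float)]
    for p in points[1:]:
        smoothed.append(alpha * np.asarray(p, dtype=float)
                        + (1.0 - alpha) * smoothed[-1])
    return np.array(smoothed)

def interpolate_stroke(p0, p1, step=2.0):
    # Insert intermediate points between consecutive detections so a fast
    # finger movement still yields a continuous drawn line.
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    n = max(int(np.linalg.norm(p1 - p0) // step), 1)
    t = np.linspace(0.0, 1.0, n + 1)[:, None]
    return (1.0 - t) * p0 + t * p1

raw = [(0, 0), (10, 0), (40, 0)]        # jittery, fast fingertip samples
path = smooth_trajectory(raw)           # filtered pen path
stroke = interpolate_stroke(path[-2], path[-1])  # densified line segment
```

In a full system the densified stroke points would be rasterized onto a canvas each frame, and the resulting movement, speed, and stroke-pixel features would feed the signature-verification stage.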