Detect human actions through walls, under occlusion, and in poor lighting conditions. The approach takes radio frequency (RF) signals (e.g. WiFi) as input, generates 3D human skeletons as an intermediate representation, and recognizes actions and interactions from them. See e.g. RF-Pose from MIT for a good illustration of the approach: http://rfpose.csail.mit.edu/ (Image credit: Making the Invisible Visible)
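To make the pipeline concrete, below is a minimal PyTorch sketch of the three stages: RF frames in, intermediate 3D skeletons, action logits out. Every module name, layer size, and tensor shape here is an illustrative assumption, not the actual RF-Pose architecture.

```python
import torch
import torch.nn as nn

# Assumed shapes: a window of RF frames -> per-frame 3D joints -> one
# action label. All sizes below are illustrative, not from RF-Pose.
NUM_JOINTS, NUM_ACTIONS = 14, 10

class RFPoseToAction(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: encode RF heatmaps (e.g. horizontal and vertical
        # antenna-array projections stacked as 2 channels) per frame.
        self.rf_encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Stage 2: regress the intermediate 3D skeleton per frame.
        self.pose_head = nn.Linear(64, NUM_JOINTS * 3)
        # Stage 3: classify the action from the skeleton sequence.
        self.action_rnn = nn.GRU(NUM_JOINTS * 3, 128, batch_first=True)
        self.classifier = nn.Linear(128, NUM_ACTIONS)

    def forward(self, rf):  # rf: (B, T, 2, H, W)
        b, t = rf.shape[:2]
        feats = self.rf_encoder(rf.flatten(0, 1)).flatten(1)  # (B*T, 64)
        skeletons = self.pose_head(feats).view(b, t, -1)      # (B, T, J*3)
        _, h = self.action_rnn(skeletons)                     # (1, B, 128)
        return skeletons, self.classifier(h[-1])              # poses, logits

model = RFPoseToAction()
poses, logits = model(torch.randn(1, 8, 2, 64, 64))  # dummy RF window
```

The design point this sketch mirrors is the explicit intermediate skeleton: the action classifier consumes a compact pose sequence rather than raw RF measurements.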
This paper introduces a global spatial aggregation scheme that learns better joint co-occurrence features than local aggregation and consistently outperforms other state-of-the-art methods on action recognition and detection benchmarks such as NTU RGB+D, SBU Kinect Interaction, and PKU-MMD.
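The core idea, moving the joint dimension into the channel axis so that convolutions aggregate over all joints at once, can be shown in a few lines. This is a toy contrast under assumed tensor shapes, not the paper's full network.

```python
import torch
import torch.nn as nn

# Assumed skeleton layout: (batch, coords=3, frames=T, joints=J).
x = torch.randn(4, 3, 32, 25)  # B, xyz, T, J

# Local aggregation: joints lie on a spatial axis, so each output
# value mixes only a small neighborhood of joints.
local = nn.Conv2d(3, 16, kernel_size=3, padding=1)(x)       # (4, 16, 32, 25)

# Global aggregation: move joints into the channel axis so every
# output feature is computed from all joints together, which lets
# the network learn joint co-occurrences independent of joint order.
x_t = x.permute(0, 3, 2, 1)                                 # (4, 25, 32, 3)
glob = nn.Conv2d(25, 16, kernel_size=3, padding=1)(x_t)     # (4, 16, 32, 3)
```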
This work develops a deep learning approach that uses annotations on 2D images, takes the received 1D WiFi signals as input, and performs body segmentation and pose estimation in an end-to-end manner; it is the first work based on off-the-shelf WiFi antennas and standard IEEE 802.11n WiFi signals.
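A minimal sketch of such a two-headed network follows: shared 1D convolutions over the received WiFi signal, then a projection onto an image-plane grid with one head for body segmentation and one for joint heatmaps. The channel counts, grid size, and the projection layer are assumptions for illustration, not the paper's actual layers.

```python
import torch
import torch.nn as nn

class WiFiBodyNet(nn.Module):
    """Illustrative WiFi-to-vision net: 1D signal in, 2D maps out."""

    def __init__(self, csi_channels=150, num_joints=14):
        super().__init__()
        # Shared encoder over the received 1D WiFi signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(csi_channels, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Project pooled features onto a (hypothetical) 24x24 image
        # plane; both heads are trained end-to-end against the
        # annotations on the paired 2D images.
        self.to_plane = nn.Linear(256, 24 * 24)
        self.seg_head = nn.Conv2d(1, 1, kernel_size=3, padding=1)
        self.pose_head = nn.Conv2d(1, num_joints, kernel_size=3, padding=1)

    def forward(self, csi):                # csi: (B, channels, samples)
        f = self.encoder(csi).mean(dim=2)  # pool over time -> (B, 256)
        plane = self.to_plane(f).view(-1, 1, 24, 24)
        return self.seg_head(plane), self.pose_head(plane)

net = WiFiBodyNet()
mask, heatmaps = net(torch.randn(2, 150, 100))  # dummy CSI batch
```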
A fully convolutional network (FCN), termed WiSPPN, is proposed to estimate single-person pose from the collected data and annotations, answering the natural question: can WiFi devices work like cameras for vision applications?
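The "fully convolutional" property is what lets one set of weights handle inputs of varying size. A toy sketch of that property, with assumed CSI tensor shapes and an invented layer stack (not WiSPPN's actual architecture):

```python
import torch
import torch.nn as nn

# No dense layers, so the spatial extent of the input may vary.
fcn = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 14, kernel_size=1),  # one response map per joint
)

# Assumed input layout: antenna-pair channels x subcarriers x samples.
joint_maps = fcn(torch.randn(1, 3, 30, 30))        # (1, 14, 30, 30)
# The same weights accept a longer capture without any reshaping:
joint_maps_wide = fcn(torch.randn(1, 3, 30, 60))   # (1, 14, 30, 60)
```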