3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in Multiview Gait Recognition.
Use these libraries to find Multiview Gait Recognition models and implementations.
A new network, GaitSet, is proposed to learn identity information from a set of silhouettes. It is immune to the permutation of frames and can naturally integrate frames from different videos filmed under different scenarios, such as diverse viewing angles and different clothing or carrying conditions.
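The permutation-invariance claim can be sketched in a few lines: if per-frame features are aggregated with an order-independent operation such as an element-wise max, shuffling the frames leaves the set-level feature unchanged. This is an illustrative simplification, not GaitSet's actual architecture; the function name and shapes are made up for the example.

```python
import numpy as np

def set_pool(frame_features: np.ndarray) -> np.ndarray:
    """Aggregate per-frame feature vectors of shape (T, D) into one
    set-level feature of shape (D,) via an element-wise max, which is
    invariant to the order (and even the source video) of the frames."""
    return frame_features.max(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 64))        # 30 frames, 64-dim features each
shuffled = feats[rng.permutation(30)]    # same frames, different order
assert np.allclose(set_pool(feats), set_pool(shuffled))
```

Because the aggregation ignores ordering, frames gathered from several walking sequences of the same person can be pooled into a single representation.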
Gait recognition, which identifies individuals by their walking patterns at a distance, is one of the most promising video-based biometric technologies. At present, most gait recognition methods treat the whole human body as a single unit when building spatio-temporal representations. However, we have observed that different parts of the human body exhibit distinctly different visual appearances and movement patterns during walking, and recent literature has verified that part-level features benefit individual recognition. Building on these insights, we assume that each part of the human body needs its own spatio-temporal expression, and we propose a novel part-based model, GaitPart, which boosts performance in two ways. First, the Focal Convolution Layer, a new application of convolution, enhances fine-grained learning of part-level spatial features. Second, the Micro-motion Capture Module (MCM) is proposed, with several parallel MCMs in GaitPart corresponding to pre-defined parts of the human body. Notably, the MCM is a novel form of temporal modeling for the gait task that focuses on short-range temporal features rather than the redundant long-range features of cyclic gait. Experiments on two of the most popular public datasets, CASIA-B and OU-MVLP, show that our method sets a new state of the art on multiple standard benchmarks. The source code will be available at https://github.com/ChaoFan96/GaitPart.
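The part-based intuition above can be sketched as splitting a feature map into horizontal strips (head, torso, legs, ...) and pooling each strip independently, so every body part gets its own descriptor. This is a deliberately simplified stand-in for GaitPart's Focal Convolution Layer, not its actual implementation; names and shapes are illustrative.

```python
import numpy as np

def part_features(feature_map: np.ndarray, n_parts: int = 4) -> np.ndarray:
    """Split a (C, H, W) feature map into n_parts horizontal strips
    along the height axis and pool each strip separately, yielding a
    (n_parts, C) matrix of part-level descriptors."""
    strips = np.array_split(feature_map, n_parts, axis=1)    # cut along H
    return np.stack([s.mean(axis=(1, 2)) for s in strips])   # pool each part

fmap = np.random.default_rng(1).normal(size=(32, 16, 8))     # C=32, H=16, W=8
parts = part_features(fmap, n_parts=4)
assert parts.shape == (4, 32)
```

Downstream temporal modeling (the MCMs in the paper) would then run on each part's descriptor sequence independently.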
GaitGraph is proposed that combines skeleton poses with Graph Convolutional Network (GCN) to obtain a modern model-based approach for gait recognition, which has the main advantages of a cleaner, more elegant extraction of the gait features and the ability to incorporate powerful spatiotemporal modeling using GCN.
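A single graph-convolution step over skeleton joints, the building block a GCN-based approach like this relies on, can be sketched as follows. The normalization and toy skeleton are illustrative assumptions, not GaitGraph's exact layer.

```python
import numpy as np

def gcn_layer(x: np.ndarray, adj: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One graph-convolution step on skeleton joints: add self-loops to
    the adjacency, row-normalize it, mix joint features along the bones,
    then apply a learned linear map and ReLU.
    x: (J, C) joint features, adj: (J, J) adjacency, weight: (C, C_out)."""
    a_hat = adj + np.eye(adj.shape[0])            # self-loops
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_norm @ x @ weight, 0.0)   # propagate + ReLU

# Toy 3-joint chain: hip -- knee -- ankle
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.ones((3, 2))
out = gcn_layer(x, adj, np.eye(2))
assert out.shape == (3, 2)
```

Stacking such layers over both the skeleton graph and the time axis gives the spatiotemporal modeling the summary refers to.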
A multi-scale context-aware network with transformer (MCAT) is proposed for gait recognition; it generates temporal features at three scales and adaptively aggregates them using contextual information from both local and global perspectives.
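One simple way to obtain temporal features at several scales, used here purely to illustrate the multi-scale idea and not MCAT's actual mechanism, is to smooth the feature sequence with moving averages of different window sizes:

```python
import numpy as np

def multi_scale_temporal(seq: np.ndarray, scales=(1, 3, 5)) -> list:
    """Smooth a (T, D) feature sequence with moving averages of several
    window sizes, producing one (T, D) feature map per temporal scale.
    Window sizes are illustrative, standing in for the three scales."""
    outs = []
    for k in scales:
        kernel = np.ones(k) / k
        smoothed = np.stack(
            [np.convolve(seq[:, j], kernel, mode="same") for j in range(seq.shape[1])],
            axis=1,
        )
        outs.append(smoothed)
    return outs

seq = np.random.default_rng(2).normal(size=(20, 4))   # T=20 frames, D=4 dims
feats = multi_scale_temporal(seq)
assert len(feats) == 3 and feats[0].shape == (20, 4)
```

An adaptive aggregation step (attention over the three scale outputs, in MCAT's case) would then combine these per-scale features.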
A Clothes-based Adversarial Loss (CAL) is proposed to mine clothes-irrelevant features from the original RGB images by penalizing the predictive power of re-id model w.r.t. clothes.
This study designs a combined full-body and fine-grained sequence learning module (FFSL) to explore part-independent spatio-temporal representations and utilizes a frame-wise compression strategy, referred to as multi-scale motion aggregation (MSMA), to capture discriminative information in the gait sequence.
A novel network model, GaitMixer, is proposed to learn more discriminative gait representations from skeleton sequence data. It follows a heterogeneous multi-axial mixer architecture, exploiting a spatial self-attention mixer followed by a temporal large-kernel convolution mixer to learn rich multi-frequency signals in the gait feature maps.
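The spatial mixing step can be sketched as plain scaled dot-product self-attention over the skeleton's joints; this minimal single-head version, with no learned projections, only illustrates the operation and is not GaitMixer's actual mixer block.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over joints.
    x: (J, D) joint features; returns (J, D), where each joint's output
    is a similarity-weighted mixture of all joints' features."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

x = np.random.default_rng(3).normal(size=(17, 8))  # e.g. 17 joints, 8-dim
y = self_attention(x)
assert y.shape == (17, 8)
```

The temporal mixer would then run large-kernel 1-D convolutions along the frame axis of each joint's feature sequence.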
This paper combines silhouettes and skeletons, refining the frame-wise joint predictions for gait recognition with temporal information from the silhouette sequences, and shows that the refined skeletons improve gait recognition performance without extra annotations.