These leaderboards are used to track progress in multi-person pose forecasting.
Use these libraries to find multi-person pose forecasting models and implementations.
A simple feed-forward deep network for motion prediction that accounts for both temporal smoothness and spatial dependencies among human body joints, together with a new graph convolutional network designed to learn graph connectivity automatically.
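The idea of learning joint connectivity rather than fixing it to the skeleton can be sketched as a graph-convolution step whose adjacency matrix is an unconstrained, trainable parameter. This is a minimal NumPy illustration of that one idea; the names, shapes, and activation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gcn_layer(x, adjacency, weight):
    """One graph-convolution step: mix joints via the (learned)
    adjacency matrix, then mix feature channels via the weight matrix.

    x:          (num_joints, in_features) per-joint features
    adjacency:  (num_joints, num_joints) trainable, unconstrained --
                not tied to the fixed kinematic skeleton
    weight:     (in_features, out_features) channel mixing
    """
    return np.tanh(adjacency @ x @ weight)

rng = np.random.default_rng(0)
num_joints, in_f, out_f = 17, 8, 8
x = rng.standard_normal((num_joints, in_f))
# In training, A and W would both receive gradients; here they are random.
A = 0.1 * rng.standard_normal((num_joints, num_joints))
W = 0.1 * rng.standard_normal((in_f, out_f))
out = gcn_layer(x, A, W)
print(out.shape)  # (17, 8)
```

Because the adjacency is learned end to end, the model can discover long-range joint dependencies (e.g. between hands and feet) that a hand-designed skeleton graph would miss.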
A novel cross-interaction attention mechanism that exploits the historical information of both persons and learns cross dependencies between the two pose sequences, predicting the future motion of two interacting persons from the two sequences of their past skeletons.
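The core mechanism, attending from one person's pose history over the other person's, can be sketched as plain scaled dot-product cross-attention. This is a hedged NumPy sketch under assumed shapes (frames × pose-feature dimension); it omits the learned query/key/value projections a real model would use.

```python
import numpy as np

rng = np.random.default_rng(1)
past_a = rng.standard_normal((10, 16))  # person A: 10 frames, 16-dim pose features
past_b = rng.standard_normal((12, 16))  # person B: 12 frames, 16-dim pose features

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Each frame of one person's history attends over the other
    person's history, yielding interaction-aware features."""
    scores = queries @ keys_values.T / np.sqrt(queries.shape[-1])
    weights = softmax(scores, axis=-1)      # (10, 12); each row sums to 1
    return weights @ keys_values, weights

a_attended, w = cross_attention(past_a, past_b)
print(a_attended.shape)  # (10, 16)
```

Running the same operation symmetrically (B attending over A) gives each person's decoder a view of the partner's motion, which is what makes the predicted futures mutually consistent.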
A Multi-Range Transformers model is introduced, consisting of a local-range encoder for individual motion and a global-range encoder for social interactions; it not only outperforms state-of-the-art methods on long-term 3D motion prediction but also generates diverse social interactions.
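The local/global split can be illustrated with two applications of the same self-attention primitive: one within each person's own frames, one over all persons' frames pooled together. This is a minimal sketch with assumed shapes and no learned projections, not the published architecture.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention, no learned projections."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ x

def multi_range_encode(person_seqs):
    """person_seqs: list of (frames, features) arrays, one per person.
    Local range:  attention within each person's own motion history.
    Global range: attention over all persons' frames jointly, so each
                  frame can depend on every other person's motion."""
    local_feats = [self_attention(p) for p in person_seqs]
    all_frames = np.concatenate(person_seqs, axis=0)
    global_feats = self_attention(all_frames)
    return local_feats, global_feats

rng = np.random.default_rng(2)
people = [rng.standard_normal((8, 6)) for _ in range(3)]
local, glob = multi_range_encode(people)
print(len(local), local[0].shape, glob.shape)  # 3 (8, 6) (24, 6)
```

Keeping the two ranges separate lets the local encoder specialize in smooth individual dynamics while the global encoder captures who is reacting to whom.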
SoMoFormer outperforms state-of-the-art methods for long-term motion prediction on the SoMoF benchmark as well as the CMU-Mocap and MuPoTS-3D datasets.
This study confirms the positive impact of frequency-domain input representations, space-time-separable and fully learnable interaction adjacencies for the encoding GCN, and fully connected decoding, and contributes a novel initialization procedure for the two-body spatial interaction parameters of the encoder, which benefits performance and stability.
A novel Trajectory-Aware Body Interaction Transformer (TBIFormer) for multi-person pose forecasting that effectively models body-part interactions; empirical evaluation on CMU-Mocap, MuPoTS-3D, and synthesized datasets demonstrates that the method greatly outperforms state-of-the-art methods.