The UTD-MHAD dataset consists of 27 different actions performed by 8 subjects. Each subject performed each action 4 times, yielding 861 action sequences in total (3 corrupted sequences were removed from the 864 recorded trials). Four modalities were captured: RGB video, depth maps, skeleton joint positions, and inertial sensor signals.
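The stated total of 861 follows from the collection setup: 27 actions × 8 subjects × 4 trials gives 864 recorded attempts, and the dataset's authors discarded 3 corrupted sequences. A quick sanity check of that arithmetic:

```python
# Sequence count in UTD-MHAD: actions x subjects x trials per subject,
# minus the 3 corrupted sequences removed by the dataset authors.
actions, subjects, trials = 27, 8, 4
recorded = actions * subjects * trials   # 864 recorded attempts
corrupted = 3                            # sequences discarded as corrupted
available = recorded - corrupted
print(available)                         # 861
```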
Source: Skepxels: Spatio-temporal Image Representation of Human Skeleton Joints for Action Recognition
Image source: https://www.researchgate.net/figure/Sample-shots-of-the-27-actions-in-the-UTD-MHAD-database_fig12_283090976