This item contains the color images captured by the Azure Kinect sensor in the back position.
This dataset supports the following publication: Xing, QJ., Shen, YY., Cao, R. et al. Functional movement screen dataset collected with two Azure Kinect depth sensors. Sci Data 9, 104 (2022). https://doi.org/10.1038/s41597-022-01188-7
This dataset for vision-based autonomous functional movement screen (FMS) was collected from 45 human subjects of different ages (18-59 years old) executing the following movements: deep squat, hurdle step, in-line lunge, shoulder mobility, active straight-leg raise, trunk stability push-up, and rotary stability. Shoulder mobility was performed only once by each subject, while the other movements were repeated for three episodes each. Each episode was saved as one record and was annotated with a score from 0 to 3 by three FMS experts. The main strength of our dataset is twofold: it provides multimodal data, including color images, depth images, and 3D human skeleton joints, and it provides multiview data collected from two synchronized Azure Kinect sensors placed in front of and to the side of the subjects. In addition, three-dimensional trajectories, quaternions, and 2D pixel trajectories of 32 joints were recorded. The dataset contains a total of 1812 recordings with 3624 episodes, and its size is 158 GB. As a supplement, we also provide color image data from two additional cameras (back and side low positions). This dataset provides the opportunity for automatic action quality evaluation of FMS.
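As a rough illustration of how the skeleton modality might be consumed, the sketch below loads one hypothetical per-episode file containing, for each frame, the 3D position, orientation quaternion, and 2D pixel coordinates of the 32 Azure Kinect joints. The file name, CSV layout, and ordering of values are assumptions made for illustration only and should be checked against the dataset documentation.

# A minimal sketch, not the official loader: the file name and the per-joint
# column layout (3D position, quaternion, 2D pixel coordinates) are assumed
# for illustration and should be verified against the dataset documentation.
import pandas as pd

N_JOINTS = 32          # Azure Kinect body tracking reports 32 skeleton joints per frame
VALUES_PER_JOINT = 9   # assumed order per joint: x, y, z, qw, qx, qy, qz, u, v

def load_skeleton_episode(csv_path):
    """Load one episode and split it into the three recorded joint streams."""
    frames = pd.read_csv(csv_path).to_numpy(dtype=float)
    per_joint = frames.reshape(len(frames), N_JOINTS, VALUES_PER_JOINT)
    return {
        "positions_3d": per_joint[:, :, 0:3],   # 3D joint trajectories
        "quaternions": per_joint[:, :, 3:7],    # joint orientations
        "pixels_2d": per_joint[:, :, 7:9],      # 2D pixel trajectories
    }

# Example (hypothetical path); each episode is scored 0-3 by three FMS experts,
# so a full record would pair these joint streams with its expert annotations.
episode = load_skeleton_episode("deep_squat_subject01_episode1.csv")
print(episode["positions_3d"].shape)  # (n_frames, 32, 3)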
Funding
The National Natural Science Foundation of China under Grant No. 72071018
The National Key R&D Program of China under Grant No. 2018YFC2000600
Research Institution(s)
School of Sports Engineering, Beijing Sport University