HuMMan v1.0: Motion Generation Subset (New!)





HuMMan v1.0: Motion Generation Subset (HuMMan-MoGen) consists of 160 actions (320 after mirroring), 179 subjects, 6264 motion sequences, and 112112 fine-grained text descriptions. This dataset is designed to facilitate large-scale study of the fine-grained motion generation task. It features temporal (by stage) and spatial (by part) text annotations for each SMPL motion sequence. Specifically, each motion sequence is divided into multiple standard action phases, and each phase is annotated not only with an overall description but also with seven more detailed annotations describing the head, torso, left arm, right arm, left leg, right leg, and the trajectory of the pelvis joint.
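The per-phase annotation layout described above can be sketched as a simple data structure. This is a hypothetical illustration only; the class and field names below are not the official toolbox format:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema sketch -- field names are illustrative,
# not the official HuMMan-MoGen file format.
@dataclass
class PhaseAnnotation:
    overall: str            # overall description of this action phase
    head: str               # per-part descriptions (7 in total)
    torso: str
    left_arm: str
    right_arm: str
    left_leg: str
    right_leg: str
    pelvis_trajectory: str  # trajectory of the pelvis joint

@dataclass
class MotionSequence:
    action: str
    subject: str
    phases: List[PhaseAnnotation]  # temporal division into standard action phases

# Example: one phase of a hypothetical sequence.
seq = MotionSequence(
    action="jumping jack",
    subject="subject_001",
    phases=[
        PhaseAnnotation(
            overall="The person jumps while spreading the legs and raising both arms.",
            head="The head stays upright, facing forward.",
            torso="The torso remains vertical.",
            left_arm="The left arm swings up sideways above the head.",
            right_arm="The right arm swings up sideways above the head.",
            left_leg="The left leg steps out to the side.",
            right_leg="The right leg steps out to the side.",
            pelvis_trajectory="The pelvis rises slightly during the jump.",
        )
    ],
)
```

Under this layout, each phase carries one overall description plus seven part-level descriptions, which is how a single sequence accumulates many fine-grained texts.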



Download links and a toolbox can be found here.

Please contact Zhongang Cai (caiz0023@e.ntu.edu.sg) for feedback.




License

HuMMan is under S-Lab License v1.0.



Citation

@inproceedings{cai2022humman,
  title={{HuMMan}: Multi-modal 4d human dataset for versatile sensing and modeling},
  author={Cai, Zhongang and Ren, Daxuan and Zeng, Ailing and Lin, Zhengyu and Yu, Tao and Wang, Wenjia and Fan,
          Xiangyu and Gao, Yang and Yu, Yifan and Pan, Liang and Hong, Fangzhou and Zhang, Mingyuan and
          Loy, Chen Change and Yang, Lei and Liu, Ziwei},
  booktitle={17th European Conference on Computer Vision, Tel Aviv, Israel, October 23--27, 2022,
             Proceedings, Part VII},
  pages={557--577},
  year={2022},
  organization={Springer}
}
@article{zhang2023finemogen,
  title={{FineMoGen}: Fine-Grained Spatio-Temporal Motion Generation and Editing},
  author={Zhang, Mingyuan and Li, Huirong and Cai, Zhongang and Ren, Jiawei and Yang, Lei and Liu, Ziwei},
  journal={NeurIPS},
  year={2023}
}