HuMMan v1.0: 3D Vision Subset (New!)





HuMMan v1.0: 3D Vision Subset (HuMMan-Point) consists of 340 subjects, 247 actions, and 907 sequences. Color videos, depth images, masks (computed with background images), SMPL parameters, and camera parameters are provided. Notably, data captured with a mobile sensor (an iPhone) are also included. This subset is well suited for 3D vision researchers studying dynamic humans captured with commercial depth sensors.
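
As a rough illustration of how the depth images, masks, and camera parameters can be combined, the sketch below back-projects one masked depth frame into a 3D point cloud using a pinhole camera model. It is a minimal example, not the official toolbox interface: the file names, the 16-bit millimetre depth encoding, and the fx/fy/cx/cy intrinsics keys are all assumptions; please refer to the released toolbox for the actual data formats.

"""Back-project a masked depth frame into a 3D point cloud (camera frame).

Minimal sketch; not the official HuMMan toolbox API. Assumptions:
  * depth is a 16-bit PNG in millimetres,
  * the mask is an 8-bit PNG with non-zero foreground pixels,
  * intrinsics are stored as a JSON with keys fx, fy, cx, cy.
"""
import json

import cv2
import numpy as np


def depth_to_pointcloud(depth_path, mask_path, intrinsics_path):
    # Load depth (millimetres) and the foreground mask.
    depth_mm = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE) > 0

    # Hypothetical intrinsics file with pinhole parameters.
    with open(intrinsics_path) as f:
        intr = json.load(f)
    fx, fy, cx, cy = intr["fx"], intr["fy"], intr["cx"], intr["cy"]

    # Pixel coordinate grid (u along columns, v along rows).
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Keep valid foreground pixels only and convert depth to metres.
    valid = mask & (depth_mm > 0)
    z = depth_mm[valid] / 1000.0
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy

    # (N, 3) points in the camera coordinate frame.
    return np.stack([x, y, z], axis=-1)


if __name__ == "__main__":
    points = depth_to_pointcloud("depth.png", "mask.png", "intrinsics.json")
    print(points.shape)

Given the provided camera extrinsics, such per-camera point clouds can then be transformed into a common world frame, e.g. for multi-view fusion or comparison against the SMPL body surface.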



Download links and a toolbox can be found here.

Please contact Zhongang Cai (caiz0023@e.ntu.edu.sg) for feedback.




License

HuMMan is under S-Lab License v1.0.



Citation

@inproceedings{cai2022humman,
  title={{HuMMan}: Multi-modal 4{D} human dataset for versatile sensing and modeling},
  author={Cai, Zhongang and Ren, Daxuan and Zeng, Ailing and Lin, Zhengyu and Yu, Tao and Wang, Wenjia and Fan,
          Xiangyu and Gao, Yang and Yu, Yifan and Pan, Liang and Hong, Fangzhou and Zhang, Mingyuan and
          Loy, Chen Change and Yang, Lei and Liu, Ziwei},
  booktitle={17th European Conference on Computer Vision, Tel Aviv, Israel, October 23--27, 2022,
             Proceedings, Part VII},
  pages={557--577},
  year={2022},
  organization={Springer}
}