3D human pose estimation
The objective of the proposed work is to understand how synthetic datasets and automatic annotation policies can advance the state of the art in 3D human pose estimation. Given an input depth image, algorithms estimating 3D human pose fall into two major categories. Single-stage approaches directly regress 3D joints from depth images (or the corresponding point clouds / voxels). Two-stage approaches first segment the depth image with dense body-part labels and then use the corresponding (segmented) point cloud to regress joint coordinates in 3D. The contribution of this thesis is three-fold. First, we demonstrate that existing two-stage approaches can be improved using automated labeling techniques on real as well as synthetic datasets individually. Second, we design a novel single-stage algorithm that takes a human point cloud as input and outputs the pose in 3D world coordinates. Finally, we study the impact of fusing synthetic datasets with real datasets during training.
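To make the two-stage pipeline concrete, the following is a minimal sketch of its structure: a segmentation stage assigns a body-part label to each foreground pixel, and a regression stage predicts one 3D joint per part. Both stages here are toy placeholders (the segmenter simply bins pixels by depth, and the regressor takes part centroids); the function names `segment_depth` and `regress_joints` are illustrative, not from the thesis.

```python
import numpy as np

def segment_depth(depth_img, n_parts=3):
    """Stage 1 (placeholder): assign each foreground pixel a body-part label.
    A real system would use a learned dense classifier; here we just bin
    pixels into n_parts ranges by their depth value."""
    mask = depth_img > 0
    labels = np.zeros(depth_img.shape, dtype=int)
    edges = np.linspace(depth_img[mask].min(),
                        depth_img[mask].max() + 1e-6, n_parts + 1)
    # np.digitize over the interior edges yields 0..n_parts-1; shift to 1..n_parts
    labels[mask] = np.digitize(depth_img[mask], edges[1:-1]) + 1
    return labels

def regress_joints(depth_img, labels, n_parts=3):
    """Stage 2 (placeholder): one 3D joint per part, taken as the centroid
    of that part's (x, y, depth) points."""
    ys, xs = np.indices(depth_img.shape)
    joints = []
    for part in range(1, n_parts + 1):
        sel = labels == part
        joints.append((xs[sel].mean(), ys[sel].mean(), depth_img[sel].mean()))
    return joints

# Tiny synthetic depth image: three horizontal bands at different depths.
depth = np.zeros((4, 4))
depth[0, :] = 1.0    # nearest band
depth[1:3, :] = 2.0  # middle band
depth[3, :] = 3.0    # farthest band

labels = segment_depth(depth)
joints = regress_joints(depth, labels)
print(len(joints))  # one 3D joint per segmented part
```

A single-stage method, by contrast, would replace both functions with one model mapping the raw point cloud directly to joint coordinates.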