Bridging the MoCap-to-Visual domain gap in human mesh recovery from 2D keypoints

2023-7-31
UĞUZ, BEDİRHAN
This study tackles the problem of recovering human meshes from 2D keypoints. We develop a model that utilizes MoCap datasets (Motion Capture domain) during training without the need for corresponding RGB images. This approach allows us to leverage existing large MoCap datasets and overcome the scarcity of paired RGB-3D datasets, which are difficult to collect. After training, we switch domains and apply our model to real visual data (visual domain) by using an off-the-shelf 2D detector to obtain 2D keypoints as inputs to our model. To minimize the impact of the domain shift from the MoCap domain to the visual domain, we introduce an adversarial domain adaptation framework and adapt our model to the visual domain. Our contributions are as follows: (i) we introduce a direct regression method, i.e., one without any iterations or recurrent connections, to recover the 3D human body mesh from 2D keypoint detections; (ii) we bridge the domain gap between the MoCap domain and the visual domain without relying on labeled visual data; (iii) our experimental results on the widely used H3.6M and 3DPW datasets demonstrate that our approach outperforms existing methods in terms of PA-MPJPE on both datasets, as well as MPJPE and PVE on 3DPW, for the task of recovering human meshes from 2D keypoints under unpaired 3D training conditions; and (iv) owing to its single-stage design, our method is up to 33x faster than its closest competitor (LGD).
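To illustrate the two ideas summarized above, a minimal PyTorch sketch follows: a single-stage regressor from 2D keypoints to SMPL parameters, trained with supervision on MoCap data, and a domain discriminator used adversarially to align MoCap-domain and visual-domain features. All layer sizes, joint counts, loss weights, and class names (KeypointRegressor, DomainDiscriminator) are illustrative assumptions, not the thesis's actual architecture or configuration.

```python
# Minimal sketch (assumed PyTorch setup) of direct keypoint-to-SMPL regression
# plus adversarial MoCap-to-visual feature alignment. Sizes are illustrative.
import torch
import torch.nn as nn

NUM_JOINTS = 24           # assumed number of 2D keypoints
SMPL_PARAMS = 72 + 10     # SMPL pose (72) + shape (10) parameters

class KeypointRegressor(nn.Module):
    """Direct (non-iterative) regression from 2D keypoints to SMPL parameters."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(NUM_JOINTS * 2, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, SMPL_PARAMS)

    def forward(self, keypoints_2d):
        feat = self.encoder(keypoints_2d.flatten(1))
        return self.head(feat), feat

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature came from the MoCap or the visual domain."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        return self.net(feat)

regressor, disc = KeypointRegressor(), DomainDiscriminator()
opt_r = torch.optim.Adam(regressor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Dummy batches standing in for real data: labeled MoCap keypoints with SMPL
# targets, and unlabeled keypoints from an off-the-shelf 2D detector.
mocap_kp = torch.randn(32, NUM_JOINTS, 2)
mocap_smpl = torch.randn(32, SMPL_PARAMS)
visual_kp = torch.randn(32, NUM_JOINTS, 2)

# 1) Train the discriminator to separate MoCap features from visual features.
with torch.no_grad():
    _, f_mocap = regressor(mocap_kp)
    _, f_visual = regressor(visual_kp)
d_loss = bce(disc(f_mocap), torch.ones(32, 1)) + \
         bce(disc(f_visual), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Train the regressor: supervised loss on MoCap data plus an adversarial
#    loss that makes visual-domain features indistinguishable from MoCap ones.
pred, _ = regressor(mocap_kp)
_, f_visual = regressor(visual_kp)
sup_loss = nn.functional.mse_loss(pred, mocap_smpl)
adv_loss = bce(disc(f_visual), torch.ones(32, 1))  # try to fool the discriminator
opt_r.zero_grad()
(sup_loss + 0.1 * adv_loss).backward()  # 0.1 is an assumed loss weight
opt_r.step()
```

In practice the regressed SMPL pose and shape parameters would be passed through an SMPL layer to obtain the mesh and evaluated with MPJPE, PA-MPJPE, and PVE as described above; that part is omitted here for brevity.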
Citation Formats
B. UĞUZ, “Bridging the MoCap-to-Visual domain gap in human mesh recovery from 2D keypoints,” M.S. - Master of Science, Middle East Technical University, 2023.