We propose a method that generates multiple 3D human pose hypotheses, all consistent with the 2D joint detections in a monocular RGB image. To generate these hypotheses we use a novel generative model defined over the space of anatomically plausible 3D poses, i.e., poses that satisfy joint-angle limits and limb-length ratios. The proposed model is uniform over this space of anatomically valid poses and therefore does not suffer from the dataset bias present in existing motion-capture datasets such as Human3.6M (H36M), HumanEva, and CMU MoCap. A model that spans the full variability of human pose and generalizes to unseen poses must be compositional, i.e., it must produce a pose by combining parts. Our model is flexible and compositional, and can consequently generalize to every plausible human 3D pose, since it is limited only by physical constraints. We discuss how to sample from this model and use these samples to generate multiple diverse 3D pose hypotheses given 2D joint detections. We argue that, given the depth ambiguity and the uncertainty caused by occlusion and imperfect 2D joint detection, generating multiple pose hypotheses from a monocular RGB image is more reasonable than committing to a single 3D pose. To support this argument, we performed an empirical evaluation on the popular Human3.6M dataset, which confirms that, most often, at least one of our pose hypotheses is closer to the true 3D pose than the single pose estimated by recent baseline methods for 3D pose reconstruction from monocular RGB images. The idea of generating multiple consistent and valid pose hypotheses opens a new line of future work that has not previously been addressed in the literature.
from cs.AI updates on arXiv.org http://ift.tt/2k4GVzs
via IFTTT
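To make the core idea concrete, here is a minimal toy sketch (not the paper's actual model): we rejection-sample joint angles uniformly within anatomical limits for a two-link kinematic chain articulating in the image-x/depth plane, and keep only 3D poses whose orthographic 2D projection matches given joint detections. The angle limits, limb lengths, and tolerance below are made-up illustrative numbers; because depth is lost under projection, several distinct 3D hypotheses survive for the same 2D detections.

```python
import math
import random

# Hypothetical per-joint angle limits (radians) and fixed limb lengths
# for a toy two-link chain; these values are illustrative only.
ANGLE_LIMITS = [(-math.pi, math.pi), (-2.5, 2.5)]
LIMB_LENGTHS = [1.0, 0.8]

def forward_kinematics(angles):
    """3D joint positions of a chain articulating in the x-z plane.
    z is the camera depth axis, so it is lost under projection."""
    joints = [(0.0, 0.0, 0.0)]
    heading = 0.0
    x, y, z = 0.0, 0.0, 0.0
    for theta, length in zip(angles, LIMB_LENGTHS):
        heading += theta
        x += length * math.cos(heading)
        z += length * math.sin(heading)
        joints.append((x, y, z))
    return joints

def project(joints):
    """Orthographic projection onto the image (x, y) plane."""
    return [(x, y) for x, y, _ in joints]

def sample_hypotheses(detections_2d, n=5, tol=0.05, max_tries=200000):
    """Uniformly sample angles within the limits (a uniform model over
    the valid poses of this toy chain); keep poses whose projection
    agrees with the 2D detections up to `tol`."""
    hypotheses = []
    for _ in range(max_tries):
        angles = [random.uniform(lo, hi) for lo, hi in ANGLE_LIMITS]
        pose = forward_kinematics(angles)
        err = max(math.hypot(px - dx, py - dy)
                  for (px, py), (dx, dy) in zip(project(pose), detections_2d))
        if err < tol:
            hypotheses.append(pose)
            if len(hypotheses) == n:
                break
    return hypotheses
```

For example, projecting a ground-truth pose such as `forward_kinematics([0.5, 0.3])` to 2D and feeding the result to `sample_hypotheses` yields multiple accepted poses whose depth (z) coordinates differ, e.g. the depth-mirrored chain produces the same projection. The paper's model is far richer (full-body, compositional, with limb-length ratio constraints), but the ambiguity this sketch exposes is the same one that motivates returning multiple hypotheses.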