Title: Conditional models for 3D human pose estimation
Name: Kanaujia, Atul (author); Metaxas, Dimitris (chair); Pavlovic, Vladimir (internal member); Elgammal, Ahmed (internal member); Kambhamettu, Chandra (outside member); Rutgers University, Graduate School - New Brunswick
Description: Human 3D pose estimation from monocular sequences is a challenging problem, owing to the highly articulated structure of the human body, varied anthropometry, self-occlusion, depth ambiguities, and the large variability in the appearance and backgrounds against which humans may appear. Conventional vision-based approaches to human 3D pose estimation mostly employed "top-down methods," which used a complete 3D human model, in a hypothesized pose, to explain the configuration of the humans in the observed 2D image. In this thesis, we work with "bottom-up methods" for human pose estimation, which use low-level image features to directly predict 3D pose. The research draws on recent innovations in statistical learning, observation-driven modeling, stable image encodings, semi-supervised learning, and learning perceptual representations. We address the problems of (a) modeling pose ambiguities due to 3D-to-2D projection and self-occlusion, (b) the lack of sufficient labeled data for training discriminative models, and (c) the high dimensionality of the human 3D pose state space. To resolve 3D pose ambiguities, we use multi-valued functions to predict multiple plausible 3D poses for an image observation. We incorporate unlabeled data in a semi-supervised learning framework to constrain and improve the training of discriminative models. We also propose generic probabilistic Spectral Latent Variable Models to efficiently learn low-dimensional representations of high-dimensional observation data, and we apply them to the problem of human 3D pose inference.
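The multi-valued prediction idea in the abstract — one image observation mapping to several plausible 3D poses — can be illustrated with a toy sketch. This is not the thesis's actual model; it is a minimal, hypothetical one-dimensional example in which each input legitimately has two valid outputs, and two linear regressors are recovered by splitting the data on the sign of the residual from a single global fit:

```python
import numpy as np

def fit_two_valued(X, Y):
    """Fit two linear maps for data where each input has two plausible
    outputs (a stand-in for a pose ambiguity). Strategy: fit one global
    line, split samples by the sign of their residual, refit per side."""
    A = np.c_[X, np.ones_like(X)]                 # design matrix with bias
    w_global, *_ = np.linalg.lstsq(A, Y, rcond=None)
    upper = (Y - A @ w_global) >= 0               # residual-sign split
    w_up, *_ = np.linalg.lstsq(A[upper], Y[upper], rcond=None)
    w_lo, *_ = np.linalg.lstsq(A[~upper], Y[~upper], rcond=None)
    return np.stack([w_lo, w_up])                 # one row per hypothesis

def predict_hypotheses(W, x):
    """Return both output hypotheses for a single input x."""
    return W @ np.array([x, 1.0])

# Toy ambiguity: every input x has two plausible outputs, x and x + 1.
xs = np.linspace(0.0, 1.0, 50)
X = np.concatenate([xs, xs])
Y = np.concatenate([xs, xs + 1.0])
W = fit_two_valued(X, Y)
hyps = predict_hypotheses(W, 0.3)   # two hypotheses, roughly 0.3 and 1.3
```

In the thesis's setting the inputs are image feature vectors and the outputs are high-dimensional joint-angle configurations, so the mixture components are learned jointly rather than by a single residual split, but the output shape is the same: several candidate poses per observation rather than one.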
Note: Includes bibliographical references (p. 182-193)
Note: by Atul Kanaujia
Collection: Graduate School - New Brunswick Electronic Theses and Dissertations
Rights: The author owns the copyright to this work.