Research Output
Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras
  Preparing datasets for training real-time face tracking algorithms for HMDs is costly. Manually annotated facial landmarks are accessible for regular photography datasets, but introspectively mounted cameras for VR face tracking have requirements incompatible with these existing datasets. Such requirements include operating ergonomically at close range with wide-angle lenses, low-latency short exposures, and near-infrared sensors. In order to train a suitable face solver without the costs of producing new training data, we automatically repurpose an existing landmark dataset to these specialist HMD camera intrinsics with a radial warp reprojection. Our method separates training into local regions of the source photos, i.e., the mouth and eyes, for more accurate local correspondence to the mounted camera locations underneath and inside the fully functioning HMD. We combine per-camera solved landmarks to yield a live animated avatar driven by the user's facial expressions. Critical robustness is achieved with measures for mouth region segmentation, blink detection and pupil tracking. We quantify results against the unprocessed training dataset and provide empirical comparisons with commercial face trackers.
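
A minimal sketch of the radial warp reprojection idea described in the abstract: 2D landmarks annotated on conventional photographs are back-projected through a pinhole model and re-projected with a wide-angle radial model approximating an HMD-mounted camera. This is not the authors' implementation; the equidistant fisheye model, function names, and intrinsic values are illustrative assumptions only.

```python
# Sketch: warp pinhole-annotated landmarks into an assumed wide-angle HMD camera view.
# All intrinsics below are placeholder values, not taken from the paper.
import numpy as np

def pinhole_to_rays(landmarks_px, fx, fy, cx, cy):
    """Back-project pixel landmarks to unit viewing rays under a pinhole model."""
    x = (landmarks_px[:, 0] - cx) / fx
    y = (landmarks_px[:, 1] - cy) / fy
    rays = np.stack([x, y, np.ones_like(x)], axis=1)
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def rays_to_fisheye(rays, fx, fy, cx, cy):
    """Project unit rays with an equidistant radial (fisheye) model: r = f * theta."""
    theta = np.arccos(np.clip(rays[:, 2], -1.0, 1.0))  # angle from the optical axis
    phi = np.arctan2(rays[:, 1], rays[:, 0])           # azimuth in the image plane
    u = cx + fx * theta * np.cos(phi)
    v = cy + fy * theta * np.sin(phi)
    return np.stack([u, v], axis=1)

# Example: warp a few mouth-region landmarks into the assumed HMD camera frame.
src_landmarks = np.array([[310.0, 420.0], [350.0, 425.0], [330.0, 460.0]])
rays = pinhole_to_rays(src_landmarks, fx=1200.0, fy=1200.0, cx=320.0, cy=240.0)
hmd_landmarks = rays_to_fisheye(rays, fx=260.0, fy=260.0, cx=200.0, cy=200.0)
print(hmd_landmarks)
```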

  • Date:

    15 November 2019

  • Publication Status:

    Published

  • DOI:

    10.1145/3359997.3365690

  • Funders:

    The Walt Disney Company Ltd

Citation

Dos Santos Brito, C. J., & Mitchell, K. (2019). Recycling a Landmark Dataset for Real-time Facial Capture and Animation with Low Cost HMD Integrated Cameras. In VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry. https://doi.org/10.1145/3359997.3365690

Authors

Dos Santos Brito, C. J., & Mitchell, K.

Keywords

real-time, facial capture, virtual reality, HMD, data preparation
