Research Output
Synthetic Prior Design for Real-Time Face Tracking
  Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning that allow fast inference of facial geometry from video streams. These learning-based approaches are strongly influenced by the quality and quantity of labelled training data. Tedious construction of training sets from real imagery can be replaced by rendering a facial animation rig under the on-set conditions expected at runtime. We learn a synthetic actor-specific prior by adapting a state-of-the-art facial tracking method. Synthetic training significantly reduces the capture and annotation burden and in theory allows generation of an arbitrary amount of data. However, practical realities such as training time and compute resources still limit the size of any training set. We construct better and smaller training sets by investigating which facial image appearances are crucial for tracking accuracy, covering the dimensions of expression, viewpoint and illumination. A reduction of training data by one to two orders of magnitude is demonstrated whilst tracking accuracy is retained on challenging on-set footage.
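
The abstract describes covering three appearance dimensions (expression, viewpoint, illumination) when rendering synthetic training imagery, then shrinking the set. A minimal sketch of that idea is below; the concrete expression names, viewpoint angles and lighting setups are hypothetical placeholders, not the paper's actual sampling scheme:

```python
import itertools
import random

# Hypothetical discretisation of the three appearance dimensions varied
# when rendering the facial animation rig. The specific values are
# illustrative assumptions, not taken from the paper.
EXPRESSIONS = ["neutral", "smile", "jaw_open", "brow_raise"]
VIEWPOINTS = [(yaw, pitch) for yaw in (-30, 0, 30) for pitch in (-15, 0, 15)]
ILLUMINATIONS = ["key_left", "key_right", "ambient"]


def full_grid():
    """Every (expression, viewpoint, illumination) render combination."""
    return list(itertools.product(EXPRESSIONS, VIEWPOINTS, ILLUMINATIONS))


def subsample(grid, fraction, seed=0):
    """Keep only a fraction of the renders, e.g. to cut the training
    set by one to two orders of magnitude as the paper reports."""
    rng = random.Random(seed)
    k = max(1, int(len(grid) * fraction))
    return rng.sample(grid, k)


if __name__ == "__main__":
    grid = full_grid()
    small = subsample(grid, 0.01)  # roughly two orders of magnitude smaller
    print(len(grid), len(small))   # prints: 108 1
```

The paper's contribution is choosing *which* appearances to keep rather than sampling uniformly at random as this sketch does; the sketch only illustrates the combinatorial growth that makes such pruning worthwhile.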

  • Date:

    19 December 2016

  • Publication Status:

    Published

  • Publisher:

    IEEE

  • DOI:

    10.1109/3dv.2016.72

  • Library of Congress:

    QA75 Electronic computers. Computer science

  • Dewey Decimal Classification:

    621.36 Optical engineering including lasers & fibre optics

  • Funders:

    Innovate UK

Citation

McDonagh, S., Klaudiny, M., Bradley, D., Beeler, T., Matthews, I., & Mitchell, K. (2016). Synthetic Prior Design for Real-Time Face Tracking. In 2016 Fourth International Conference on 3D Vision (3DV). https://doi.org/10.1109/3dv.2016.72

Keywords

real-time tracking, face performance capture