Maximising audiovisual correlation with automatic lip tracking and vowel based segmentation
  In recent years, the established link between the various domains of human communication production has become more widely utilised in the field of speech processing. In this work, a state-of-the-art Semi Adaptive Appearance Model (SAAM) approach developed by the authors is used for automatic lip tracking, and an adapted version of our vowel-based speech segmentation system is employed to segment speech automatically. Canonical Correlation Analysis (CCA) applied to segmented and non-segmented data in a range of noisy speech environments shows that segmented speech has significantly better audiovisual correlation, demonstrating the feasibility of our techniques for further development as part of a proposed audiovisual speech enhancement system.
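The correlation measure named in the abstract can be illustrated with a minimal sketch. The Python snippet below (using scikit-learn's CCA implementation) computes the first canonical correlation between time-aligned audio and visual feature matrices; the feature dimensions, frame counts, and random data are placeholders for illustration only, not the features or results reported in the paper.

import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical time-aligned feature matrices: one row per frame.
# audio_feats might hold spectral features; visual_feats might hold
# lip-shape parameters from a tracker. Random data stands in here.
rng = np.random.default_rng(0)
audio_feats = rng.standard_normal((200, 23))   # 200 frames x 23 audio features
visual_feats = rng.standard_normal((200, 6))   # 200 frames x 6 visual features

# Project both modalities onto their first pair of canonical variates.
cca = CCA(n_components=1)
cca.fit(audio_feats, visual_feats)
a_c, v_c = cca.transform(audio_feats, visual_feats)

# The canonical correlation is the Pearson correlation of the projected pair;
# comparing this value for segmented versus non-segmented speech is the kind
# of measurement the abstract describes.
corr = np.corrcoef(a_c[:, 0], v_c[:, 0])[0, 1]
print(f"First canonical correlation: {corr:.3f}")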

Citation

Abel, A., Hussain, A., Nguyen, Q., Ringeval, F., Chetouani, M., & Milgram, M. (2009). Maximising audiovisual correlation with automatic lip tracking and vowel based segmentation. In Biometric ID Management and Multimodal Communication (pp. 65-72). https://doi.org/10.1007/978-3-642-04391-8_9

Keywords

Canonical Correlation, Canonical Correlation Analysis, Noisy Environment, Speech Enhancement, Visual Speech
