Research Output
Towards real-time privacy-preserving audio-visual speech enhancement
  In everyday noisy situations, the human auditory cortex is known to exploit aural and visual cues, contextually combined by the brain's multi-level integration strategies, to selectively suppress background noise and focus on a target speaker. The multimodal nature of speech is well established, with listeners known to unconsciously lip-read to improve the intelligibility of speech in noise. However, despite significant research in the area of audio-visual (AV) speech enhancement, real-time processing models with low latency remain a formidable technical challenge. In this paper, we propose a novel audio-visual speech enhancement model based on Temporal Convolutional Networks (TCNs) that exploits privacy-preserving lip-landmark flow features for speech enhancement in multi-talker cocktail-party environments. In addition, we propose an efficient implementation of the TCN, called Fast-TCN, to enable real-time deployment of the proposed framework. Comparative simulation results in terms of speech quality and intelligibility demonstrate the effectiveness of our proposed AV model compared to benchmark audio-only and audio-visual approaches in speaker- and noise-independent scenarios.
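
  The record does not link the authors' implementation, but the kind of architecture the abstract names, a causal TCN over fused audio and lip-landmark flow features, can be sketched. The following is a minimal, hypothetical PyTorch sketch: the class names (CausalTCNBlock, AVEnhancerSketch), layer sizes, the 40-dimensional flow features, and the mask-based output are all illustrative assumptions, not the paper's Fast-TCN, whose optimizations are not described in this record.

```python
# Hypothetical sketch of a causal TCN audio-visual enhancer; not the
# authors' code. All dimensions and the fusion/mask scheme are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalTCNBlock(nn.Module):
    """One residual block of dilated 1-D convolution with left-only (causal)
    padding, so each output frame depends only on current and past frames --
    the property that makes frame-by-frame, low-latency streaming possible."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad on the left only
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.PReLU()
        # Per-frame normalization (no statistics across time), so the block
        # remains causal when run in a streaming fashion.
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        y = F.pad(x, (self.pad, 0))  # causal padding: no future context
        y = self.act(self.conv(y))
        y = self.norm(y.transpose(1, 2)).transpose(1, 2)
        return x + y  # residual connection, as is standard in TCNs


class AVEnhancerSketch(nn.Module):
    """Fuses noisy audio features with lip-landmark flow features
    (frame-to-frame landmark displacements, which avoid transmitting raw
    face video) and predicts a mask over the noisy audio features."""

    def __init__(self, audio_dim: int = 257, flow_dim: int = 40,
                 channels: int = 256, blocks: int = 8):
        super().__init__()
        self.proj = nn.Conv1d(audio_dim + flow_dim, channels, 1)
        # Exponentially growing dilations give a long causal receptive field.
        self.tcn = nn.Sequential(*[
            CausalTCNBlock(channels, dilation=2 ** i) for i in range(blocks)
        ])
        self.mask = nn.Conv1d(channels, audio_dim, 1)

    def forward(self, audio: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # audio: (B, audio_dim, T) noisy magnitude features
        # flow:  (B, flow_dim, T) lip-landmark displacements, upsampled to T
        h = self.proj(torch.cat([audio, flow], dim=1))
        return torch.sigmoid(self.mask(self.tcn(h))) * audio


# Toy usage: ~1 s of audio as 100 STFT frames; lip-landmark flow upsampled
# from the video frame rate to the same 100 frames.
model = AVEnhancerSketch()
audio = torch.randn(1, 257, 100)  # noisy magnitude spectrogram
flow = torch.randn(1, 40, 100)    # landmark displacement features
enhanced = model(audio, flow)     # (1, 257, 100) masked magnitudes
```

  The left-only padding is the design choice that matters here: because no layer looks at future frames, the network can be evaluated frame by frame at inference time, which is presumably the behaviour a real-time variant such as Fast-TCN would further optimise.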

  • Type:

    Conference Paper (unpublished)

  • Date:

    23 September 2022

  • Publication Status:

    Unpublished

  • Publisher:

    ISCA

  • DOI:

    10.21437/spsc.2022-2

  • Crossref:

    10.21437/spsc.2022-2

  • Funders:

    Engineering and Physical Sciences Research Council (EPSRC)

Citation

Gogate, M., Dashtipour, K., & Hussain, A. (2022, September). Towards real-time privacy-preserving audio-visual speech enhancement. Paper presented at the 2nd Symposium on Security and Privacy in Speech Communication, Incheon, Korea.

Authors

Mandar Gogate, Kia Dashtipour, Amir Hussain

Keywords

speech enhancement, audio-visual speech separation, privacy-preserving
