COG-MHEAR: Towards cognitively-inspired, 5G-IoT enabled multi-modal hearing aids
  Embracing the multimodal nature of speech presents both opportunities and challenges for hearing assistive technology:
on the one hand, there are opportunities for the design of new multimodal audio-visual (AV) algorithms; on the other hand,
multimodality challenges the current standards for hearing aid evaluation, which generally consider the perception of the
audio signal in isolation.
Our hypothesis is that it is possible to contextually combine visual and acoustic inputs to produce a "real-time", cognitively inspired, multimodal hearing device that can significantly boost speech intelligibility in the everyday listening
environments in which traditional audio-only hearing devices prove ineffective. To test this hypothesis, and building on the
team's recent preliminary research into offline lip-reading and deep-learning-driven AV speech
enhancement (SE) algorithms, we aim to develop and clinically validate next-generation, cognitively-inspired
AV hearing-aid (HA) prototypes operating in real time through 'off-chip' and 'on-chip' implementations.
The disruptive device will autonomously and contextually adapt to the nature and quality of its visual and acoustic
environmental inputs. We will achieve this aim by designing a transformative, privacy-preserving AV SE framework
that integrates a next-generation communication network solution, the 5G Cloud Radio Access Network (C-RAN), with the Internet
of Things (IoT), context-aware machine learning and strong privacy algorithms, for optimised 'off-chip' and 'on-chip' real-time
processing.
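
For illustration only, the sketch below shows one common way such AV speech enhancement can be structured: per-frame audio spectrogram features and lip-region visual embeddings are projected into a shared space, fused by a recurrent layer, and used to predict a time-frequency mask applied to the noisy signal. This is a minimal, hypothetical example; the class name AVMaskEstimator, the layer sizes and the feature dimensions are assumptions for the sketch, not the project's actual architecture.

    # Minimal, illustrative sketch (not the COG-MHEAR implementation): fuse audio
    # spectrogram frames with lip-region visual embeddings to estimate a
    # time-frequency mask for speech enhancement. All dimensions are assumptions.
    import torch
    import torch.nn as nn

    class AVMaskEstimator(nn.Module):
        def __init__(self, n_freq=257, visual_dim=128, hidden=256):
            super().__init__()
            self.audio_proj = nn.Linear(n_freq, hidden)       # per-frame audio features
            self.visual_proj = nn.Linear(visual_dim, hidden)  # per-frame lip embeddings
            self.fusion = nn.GRU(2 * hidden, hidden, batch_first=True)
            self.mask_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

        def forward(self, noisy_mag, visual_emb):
            # noisy_mag: (batch, frames, n_freq) magnitude spectrogram
            # visual_emb: (batch, frames, visual_dim) lip-reading features
            fused = torch.cat([self.audio_proj(noisy_mag),
                               self.visual_proj(visual_emb)], dim=-1)
            out, _ = self.fusion(fused)
            mask = self.mask_head(out)           # mask values in [0, 1]
            return mask * noisy_mag              # enhanced magnitude estimate

    # Usage with random tensors standing in for real STFT frames and lip features.
    model = AVMaskEstimator()
    enhanced = model(torch.rand(1, 100, 257), torch.rand(1, 100, 128))
    print(enhanced.shape)  # torch.Size([1, 100, 257])

In a hearing-aid setting such a model would additionally need to meet tight latency and power budgets, which is where the project's 'off-chip' (cloud/edge via 5G C-RAN) and 'on-chip' processing split becomes relevant.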

  • Start Date: 1 March 2021

  • End Date: 28 February 2026

  • Activity Type: Externally Funded Research

  • Funder: Engineering and Physical Sciences Research Council

  • Value: £3,258,999

Project Team