This groundbreaking initiative seeks to revolutionize hearing care by redefining hearing aid design: incorporating cognitive principles and multi-modal sensory input, and utilizing innovative technologies.

Funder: EPSRC

COG-MHEAR is a Programme Grant funded by the EPSRC Transformative Healthcare Technologies for 2050 Call. It is being undertaken by a multidisciplinary team of experts led by Edinburgh Napier University, in collaboration with the University of Edinburgh, University of Glasgow, University of Wolverhampton, Heriot-Watt University, University of Manchester, and the University of Nottingham. The world-class research team is complemented by a strong user group comprising clinical and industrial partners and end-users. These range from leading global hearing aid manufacturers (Sonova) and wireless research and standardisation drivers (Nokia Bell-Labs) to a chip-design SME (Alpha Data), national innovation centres (the Digital Health & Care Institute (DHI) and The Data Lab), and charities (Deaf Scotland (DS) and Action on Hearing Loss).

Over 12 million people in the UK (approximately 1.5 billion globally) are affected by hearing loss, which costs the NHS around £0.5 billion annually. Hearing aids (HAs) are the most widely used technology for compensating for the majority of hearing losses. Currently, only 40% of people who could benefit from HAs have them, and most people who have HA devices do not use them often enough. COG-MHEAR aims to transform hearing care by 2050 by completely re-thinking the way HAs are currently designed. Our transformative approach draws, for the first time, on the cognitive principles of normal hearing. Listeners naturally combine information from both their ears and eyes: we use our eyes to help us hear. We are creating "multi-modal" aids which not only amplify sounds but also use information collected from a range of sensors to improve understanding of speech. For example, a large amount of information about what a person is saying is conveyed visually, in the movements of the speaker's lips, their hand gestures, and similar cues. This information is ignored by current HAs, but could be fed into the speech enhancement process. We are also using wearable sensors (which could be embedded within future HAs) to estimate listening effort and its impact on the listener, and to tell whether the speech enhancement process is actually helping.

Creating multi-modal "audio-visual" HAs raises major technical challenges which need to be tackled holistically. Making use of lip movements traditionally requires a video camera filming the speaker, which raises privacy questions. We are addressing some of these questions by developing transformative privacy-preserving approaches that use conventional video data. To complement this, we are also investigating more ambitious methods for remote lip reading that do not need a video feed at all, instead exploring the use of radio signals for remote monitoring. Adding these new sensors, and the processing required to make sense of the data they produce, will place a significant additional power and miniaturisation burden on the HA device. We are addressing the need for our sophisticated visual and sound processing algorithms to operate with minimum power and minimum delay by building dedicated hardware implementations that accelerate the key processing steps. In the long term, we aim for all processing to be done in the HA itself, keeping data local to the person for privacy. In the shorter term, some processing will need to be done in the cloud (as it is too power intensive), and we are creating new very-low-latency (<10 ms) interfaces to cloud infrastructure to avoid delays between when a word is "seen" being spoken and when it is heard. We also plan to utilise advances in flexible electronics (e-skin) and antenna design to make the overall unit as small, discreet, and usable as possible.
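The trade-off between on-device and cloud processing can be framed as a simple latency-budget check. The 10 ms end-to-end target comes from the text above, but the individual stage timings in this toy sketch are invented for illustration only.

```python
# Toy latency-budget check for deciding whether a processing stage can be
# offloaded to the cloud. Only the 10 ms end-to-end target is from the
# programme description; all stage timings below are hypothetical.

TOTAL_BUDGET_MS = 10.0

def fits_budget(stage_times_ms, network_rtt_ms=0.0):
    """True if local stages plus any cloud round-trip fit the budget."""
    return sum(stage_times_ms) + network_rtt_ms <= TOTAL_BUDGET_MS

# Hypothetical per-stage timings (ms):
on_device = [1.5, 4.0, 2.0]   # capture, enhancement, playback, all local
offloaded = [1.5, 2.0]        # capture and playback local; enhancement in cloud

print(fits_budget(on_device))                      # 7.5 ms total
print(fits_budget(offloaded, network_rtt_ms=9.0))  # 12.5 ms total
```

The point of the check is that offloading only helps if the network round-trip is itself well under the perceptual budget, which is why the programme targets sub-10 ms cloud interfaces.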

The ambitious COG-MHEAR programme is continuing to shape the global hearing research landscape through a large number of academic publications in prestigious journals (such as Nature Communications), the organisation of international challenges and workshops at world-leading conferences, and the global showcasing of a range of real-time multi-modal Hearing Assistive Technology demonstrators.