Research Output
Cognitively inspired speech processing for multimodal hearing technology
  In recent years, the links between the various domains of human communication production have become more widely utilised in the field of speech processing. Work by the authors and others has demonstrated that intelligently integrated audio and visual information can be used for speech enhancement, and this advance means that the use of visual information as part of hearing aids or assistive listening devices is becoming ever more viable. One issue that is not commonly explored is how a multimodal system copes with variations in data quality and availability, such as a speaker covering their face while talking, or the presence of multiple speakers in a conversational scenario; a hearing device would be expected to cope with such changes by switching between different programmes and settings to adapt to the environment. We present the ChallengAV audiovisual corpus, which is used to evaluate a novel fuzzy logic based audiovisual switching system, designed to form part of a next-generation adaptive, autonomous, context-aware hearing system. Initial results show that the detectors are capable of determining environmental conditions and responding appropriately, demonstrating the potential of such an adaptive multimodal system as part of a state-of-the-art hearing aid.
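  As a rough illustration of the kind of fuzzy logic based switching the abstract describes, the sketch below fuzzifies two hypothetical detector outputs (visual quality and noise level) and selects the programme whose rule fires most strongly. The variable names, membership functions, rules, and programmes here are illustrative assumptions, not the system evaluated in the paper.

    # Minimal sketch of fuzzy-logic programme switching for a multimodal
    # hearing device. Hypothetical inputs, terms, and rules; NOT the
    # authors' system from the paper.

    def tri(x: float, a: float, b: float, c: float) -> float:
        """Triangular membership function peaking at b over the support [a, c]."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def switch_programme(visual_quality: float, noise_level: float) -> str:
        """Pick a device programme from two detector outputs scaled to [0, 1]."""
        # Fuzzification: degree of membership in each linguistic term.
        vis_poor = tri(visual_quality, -1.0, 0.0, 0.6)   # e.g. face occluded
        vis_good = tri(visual_quality, 0.4, 1.0, 2.0)
        noise_low = tri(noise_level, -1.0, 0.0, 0.6)
        noise_high = tri(noise_level, 0.4, 1.0, 2.0)

        # Rule base: min() acts as the fuzzy AND operator.
        strengths = {
            "audiovisual_enhancement": min(vis_good, noise_high),
            "audio_noise_reduction": min(vis_poor, noise_high),
            "audio_only": noise_low,
        }
        # Crisp decision: fire the programme with the strongest activation.
        return max(strengths, key=strengths.get)

    if __name__ == "__main__":
        # Clear face, noisy scene -> exploit the visual stream.
        print(switch_programme(visual_quality=0.9, noise_level=0.8))
        # Occluded face, noisy scene -> fall back to audio-only processing.
        print(switch_programme(visual_quality=0.1, noise_level=0.8))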

Citation

Abel, A., Hussain, A., & Luo, B. (2015). Cognitively inspired speech processing for multimodal hearing technology. In 2014 IEEE Symposium on Computational Intelligence in Healthcare and e-health (CICARE), (56-63). https://doi.org/10.1109/CICARE.2014.7007834

Authors

A. Abel, A. Hussain, B. Luo

Keywords

Visualization, Speech, Input variables, Detectors, Fuzzy logic, Noise, Speech enhancement
