Dr Erfan Loweimi

Biography

Erfan Loweimi holds a part-time EPSRC Research Fellowship at the School of Computing, Engineering & The Built Environment, and serves as a full-time Research Associate at the Machine Intelligence Laboratory, University of Cambridge (2022-). Previously, he held Research Associate positions at King's College London (2021-2023) and at the Centre for Speech Technology Research (CSTR), University of Edinburgh (2018-2021). He earned his PhD from the University of Sheffield in 2018, where he was a Faculty Scholar in the Speech and Hearing Research Group (SPandH), Department of Computer Science.

Erfan has received recognition for his contributions, including the Research Communicator of the Year Award (University of Sheffield, 2017) and the Outstanding Reviewer Award (IEEE ICASSP, 2022). He has actively served the research community in various capacities, including as an Area Chair at leading conferences (INTERSPEECH, ICASSP, EMNLP), as a Publication Chair and Organising Committee Member for several conferences, and as an associate member of the IEEE Speech and Language Processing Technical Committee (SLTC).

Dr Loweimi has published 38 peer-reviewed papers and is first author of more than 28 journal and conference papers, including three in the IEEE/ACM Transactions on Audio, Speech and Language Processing. His research interests encompass end-to-end speech processing and recognition, applications of speech technology in healthcare, explainable and trustworthy AI-based speech technology, multi-modal speech processing, and multi-modal information retrieval.

Esteem

Conference Organising Activity

  • Meta Reviewer in IEEE ICASSP 2024
  • Area Chair in ISCA INTERSPEECH 2024
  • UKSpeech 2024 Co-organiser
  • Area Chair in ISCA INTERSPEECH 2023
  • Area Chair in EMNLP 2023
  • Meta Reviewer in IEEE ICASSP 2023
  • Publication Chair in IEEE Spoken Language Technology Workshop (SLT)
  • UKSpeech 2016 Co-organiser

 

Fellowships and Awards

  • Outstanding Reviewer Award (IEEE ICASSP 2022)
  • Research Communicator of the Year Award (University of Sheffield, 2017)

 

Invited Speaker

  • Speaker Retrieval in the Wild, BBC Broadcasting House, London, UK, 2024
  • Speaker Retrieval in the Wild: Challenges, Effectiveness and Robustness, University of Cambridge, Cambridge, UK, 2024
  • Phonetic Error Analysis beyond Phone Error Rate, University of Edinburgh, Edinburgh, UK, 2023
  • Speech Acoustic Modelling from Raw Signal Representations, Edinburgh Napier University, Edinburgh, UK, 2022
  • Recent Advances in Interpreting and Understanding DNNs, Iranian Conference on Machine Vision and Image Processing (MVIP), Iran, 2022
  • On the Robustness and Training Dynamics of Raw Waveform Models, University of Edinburgh, Edinburgh, UK, 2021
  • Raw Sign and Magnitude Spectra for Multi-head Acoustic Modelling, University of Edinburgh, Edinburgh, UK, 2020
  • DNN Statistical Interpretation and Normalisation for ASR, University of Edinburgh, Edinburgh, UK, 2019
  • Understanding and Interpreting DNNs for Speech Recognition, Qatar Computing Research Institute (QCRI), Doha, Qatar, 2019
  • Robust Phase-based Speech Signal Processing; From Source-Filter Separation to Model-Based Robust ASR, University of Sheffield, Sheffield, UK, 2018
  • Speech Phase Spectrum: Love It or Leave It?, University of Edinburgh, Edinburgh, UK, 2018
  • Genie in the mike! The Science of Talking (with) Machines, A Pint of Science Festival, Sheffield, UK, 2017
  • Channel Compensation in the Generalised VTS Approach to Robust ASR, UKSpeech 2017, University of Cambridge, Cambridge, UK, 2017
  • Deep Learning, The End of History and The Last Computer Scientist, A Pint of Science Festival, Sheffield, UK, 2016
  • Signal Processing is Dead(?)! Long Live DNN!, Machine Intelligence for Natural Interfaces (MINI) workshop, Sheffield, UK, 2016
  • Source-filter Separation of Speech Signal in the Phase Domain, UKSpeech 2015, University of East Anglia, Norwich, UK, 2015

 

Membership of Professional Body

  • International Speech Communication Association (ISCA)
  • IEEE Signal Processing Society

 

Public/Community Engagement

  • Genie in the mike! The Science of Talking (with) Machines, A Pint of Science Festival, Sheffield, UK, 2017
  • Deep Learning, The End of History and The Last Computer Scientist, A Pint of Science Festival, Sheffield, UK, 2016

 

Reviewing

  • IEEE/ACM Transactions on Audio, Speech, and Language Processing
  • IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
  • Computer Speech & Language
  • International Conference on Affective Computing & Intelligent Interaction (ACII)
  • ISCA INTERSPEECH
  • IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

 

Visiting Positions

  • Visitor at King's College London (KCL)
  • Visitor at the Centre for Speech Technology Research (CSTR), University of Edinburgh

 

Publications

Phonetic Error Analysis Beyond Phone Error Rate

Journal Article
Loweimi, E., Carmantini, A., Bell, P., Renals, S., & Cvetkovic, Z. (2023)
Phonetic Error Analysis Beyond Phone Error Rate. IEEE/ACM Transactions on Audio, Speech and Language Processing, 31, 3346-3361. https://doi.org/10.1109/taslp.2023.3313417
In this article, we analyse the performance of the TIMIT-based phone recognition systems beyond the overall phone error rate (PER) metric. We consider three broad phonetic cla...

Dysarthric Speech Recognition, Detection and Classification using Raw Phase and Magnitude Spectra

Conference Proceeding
Yue, Z., Loweimi, E., & Cvetkovic, Z. (2023)
Dysarthric Speech Recognition, Detection and Classification using Raw Phase and Magnitude Spectra. In Proc. INTERSPEECH 2023 (1533-1537). https://doi.org/10.21437/interspeech.2023-222
In this paper, we explore the effectiveness of deploying the raw phase and magnitude spectra for dysarthric speech recognition, detection and classification. In particular, we...

Multi-Stream Acoustic Modelling Using Raw Real and Imaginary Parts of the Fourier Transform

Journal Article
Loweimi, E., Yue, Z., Bell, P., Renals, S., & Cvetkovic, Z. (2023)
Multi-Stream Acoustic Modelling Using Raw Real and Imaginary Parts of the Fourier Transform. IEEE/ACM Transactions on Audio, Speech and Language Processing, 31, 876-890. https://doi.org/10.1109/taslp.2023.3237167
In this paper, we investigate multi-stream acoustic modelling using the raw real and imaginary parts of the Fourier transform of speech signals. Using the raw magnitude spectr...

Acoustic Modelling From Raw Source and Filter Components for Dysarthric Speech Recognition

Journal Article
Yue, Z., Loweimi, E., Christensen, H., Barker, J., & Cvetkovic, Z. (2022)
Acoustic Modelling From Raw Source and Filter Components for Dysarthric Speech Recognition. IEEE/ACM Transactions on Audio, Speech and Language Processing, 30, 2968-2980. https://doi.org/10.1109/taslp.2022.3205766
Acoustic modelling for automatic dysarthric speech recognition (ADSR) is a challenging task. Data deficiency is a major problem and substantial differences between typical and...

Dysarthric Speech Recognition From Raw Waveform with Parametric CNNs

Presentation / Conference
Yue, Z., Loweimi, E., Christensen, H., Barker, J., & Cvetkovic, Z. (2022, September)
Dysarthric Speech Recognition From Raw Waveform with Parametric CNNs. Paper presented at Interspeech 2022, Incheon, Korea
Raw waveform acoustic modelling has recently received increasing attention. Compared with the task-blind hand-crafted features which may discard useful information, representa...

RCT: Random consistency training for semi-supervised sound event detection

Presentation / Conference
Shao, N., Loweimi, E., & Li, X. (2022, September)
RCT: Random consistency training for semi-supervised sound event detection. Paper presented at Interspeech 2022, Incheon, Korea
Sound event detection (SED), as a core module of acoustic environmental analysis, suffers from the problem of data deficiency. The integration of semi-supervised learning (SSL...

Multi-Modal Acoustic-Articulatory Feature Fusion For Dysarthric Speech Recognition

Conference Proceeding
Yue, Z., Loweimi, E., Cvetkovic, Z., Christensen, H., & Barker, J. (2022)
Multi-Modal Acoustic-Articulatory Feature Fusion For Dysarthric Speech Recognition. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/icassp43922.2022.9746855
Building automatic speech recognition (ASR) systems for speakers with dysarthria is a very challenging task. Although multi-modal ASR has received increasing attention recentl...

Raw Source and Filter Modelling for Dysarthric Speech Recognition

Conference Proceeding
Yue, Z., Loweimi, E., & Cvetkovic, Z. (2022)
Raw Source and Filter Modelling for Dysarthric Speech Recognition. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). https://doi.org/10.1109/icassp43922.2022.9746553
Acoustic modelling for automatic dysarthric speech recognition (ADSR) is a challenging task. Data deficiency is a major problem and substantial differences between the typical...

Speech Acoustic Modelling Using Raw Source and Filter Components

Conference Proceeding
Loweimi, E., Cvetkovic, Z., Bell, P., & Renals, S. (2021)
Speech Acoustic Modelling Using Raw Source and Filter Components. In Proc. Interspeech 2021 (276-280). https://doi.org/10.21437/interspeech.2021-53
Source-filter modelling is among the fundamental techniques in speech processing with a wide range of applications. In acoustic modelling, features such as MFCC and PLP which ...

Stochastic Attention Head Removal: A Simple and Effective Method for Improving Transformer Based ASR Models

Conference Proceeding
Zhang, S., Loweimi, E., Bell, P., & Renals, S. (2021)
Stochastic Attention Head Removal: A Simple and Effective Method for Improving Transformer Based ASR Models. In Proc. Interspeech 2021 (2541-2545). https://doi.org/10.21437/interspeech.2021-280
Recently, Transformer based models have shown competitive automatic speech recognition (ASR) performance. One key factor in the success of these models is the multi-head atten...