Research Output
Deep and sparse learning in speech and language processing: An overview
  Large-scale deep neural models, e.g., deep neural networks (DNNs) and recurrent neural networks (RNNs), have demonstrated significant success in solving various challenging tasks of speech and language processing (SLP), including speech recognition, speech synthesis, document classification and question answering. This growing impact corroborates neurobiological evidence concerning the presence of layer-wise deep processing in the human brain. On the other hand, sparse coding representation has gained similar success in SLP, particularly in signal processing, demonstrating sparsity as another important neurobiological characteristic. Recently, research in these two directions has led to increasing cross-fertilisation of ideas; thus, a unified sparse deep or deep sparse learning framework warrants much attention. This paper provides an overview of the growing interest in this unified framework and outlines future research possibilities in this multi-disciplinary area.

  • Date:

    13 November 2016

  • Publication Status:

    Published

  • DOI:

    10.1007/978-3-319-49685-6_16

  • Funders:

    Engineering and Physical Sciences Research Council

Citation

Wang, D., Zhou, Q., & Hussain, A. (2016). Deep and sparse learning in speech and language processing: An overview. In Advances in Brain Inspired Cognitive Systems (pp. 171-183). https://doi.org/10.1007/978-3-319-49685-6_16

Keywords

Deep learning; Sparse coding; Speech processing; Language processing
