Research Output
Sentiment Analysis Meets Explainable Artificial Intelligence: A Survey on Explainable Sentiment Analysis
  Sentiment analysis derives knowledge about emotions and opinions from user-generated textual data. As computing power has grown and benchmark datasets have become more widely available, models based on deep neural networks have emerged as the dominant approach to sentiment analysis. While these models offer significant performance advantages, their lack of interpretability makes it difficult to understand the rationale behind their reasoning and prediction processes, complicating efforts to explain their behavior. Moreover, only limited research has been carried out on developing deep learning models that describe their own internal functionality and behavior. In this timely study, we present a first-of-its-kind overview of key sentiment analysis techniques and eXplainable Artificial Intelligence (XAI) methodologies currently in use, and provide a comprehensive review of explainability in sentiment analysis.
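To make the notion of explainable sentiment analysis concrete, here is a minimal, hypothetical sketch of an intrinsically interpretable approach the survey's topic covers: a lexicon-based scorer whose per-word weights double as the explanation of its prediction. The lexicon entries, function name, and labels are illustrative assumptions, not taken from the survey itself.

```python
# Illustrative only: a tiny sentiment lexicon whose word weights are
# themselves the explanation of each prediction (hypothetical values).
LEXICON = {"good": 1.0, "great": 2.0, "love": 2.0,
           "bad": -1.0, "awful": -2.0, "hate": -2.0}

def explain_sentiment(text):
    """Return (label, contributions): the predicted sentiment and the
    per-word scores that produced it."""
    contributions = {w: LEXICON[w] for w in text.lower().split() if w in LEXICON}
    score = sum(contributions.values())
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return label, contributions

label, why = explain_sentiment("I love this great phone but the battery is bad")
print(label)  # positive
print(why)    # {'love': 2.0, 'great': 2.0, 'bad': -1.0}
```

Deep neural sentiment models lack this built-in transparency, which is why post-hoc XAI methods of the kind surveyed here are needed to attribute their predictions back to input words.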

  • Type:

    Article

  • Date:

    17 July 2023

  • Publication Status:

    In Press

  • Publisher:

    Institute of Electrical and Electronics Engineers (IEEE)

  • DOI:

    10.1109/taffc.2023.3296373

  • Funders:

    Edinburgh Napier Funded

Citation

Diwali, A., Saeedi, K., Dashtipour, K., Gogate, M., Cambria, E., & Hussain, A. (in press). Sentiment Analysis Meets Explainable Artificial Intelligence: A Survey on Explainable Sentiment Analysis. IEEE Transactions on Affective Computing. https://doi.org/10.1109/taffc.2023.3296373

Keywords

Sentiment analysis, Deep learning, Explainability, Interpretability
