Research Output
A novel context-aware multimodal framework for Persian sentiment analysis
  Most recent work on sentiment analysis has exploited the text modality. However, the millions of hours of video posted on social media platforms every day hold vital unstructured information that can be exploited to gauge public perception more effectively. Multimodal sentiment analysis offers an innovative solution for computationally understanding and harvesting sentiments from videos by contextually exploiting audio, visual, and textual cues. In this paper, we first present a first-of-its-kind Persian multimodal dataset comprising more than 800 utterances, as a benchmark resource for researchers to evaluate multimodal sentiment analysis approaches in the Persian language. Second, we present a novel context-aware multimodal sentiment analysis framework that simultaneously exploits acoustic, visual, and textual cues to determine the expressed sentiment more accurately. We employ both decision-level (late) and feature-level (early) fusion methods to integrate affective cross-modal information. Experimental results demonstrate that the contextual integration of multimodal features (textual, acoustic, and visual) delivers better performance (91.39%) than unimodal features (89.24%).
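
  The abstract contrasts feature-level (early) and decision-level (late) fusion. As a minimal, hypothetical sketch (not the authors' implementation), the Python snippet below illustrates the distinction: early fusion concatenates per-modality feature vectors before a single classifier sees them, while late fusion combines the class probabilities produced by independent per-modality classifiers. All feature dimensions and function names here are assumptions made for illustration.

      # Illustrative sketch only; dimensions and names are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)

      # Toy per-utterance feature vectors for each modality.
      text_feat  = rng.standard_normal(300)   # e.g. a sentence embedding
      audio_feat = rng.standard_normal(128)   # e.g. acoustic descriptors
      video_feat = rng.standard_normal(256)   # e.g. visual descriptors

      def early_fusion(text, audio, video):
          """Feature-level fusion: concatenate modality features,
          then feed the joint vector to a single classifier."""
          return np.concatenate([text, audio, video])

      def late_fusion(probs_text, probs_audio, probs_video):
          """Decision-level fusion: average the class probabilities
          output by independent per-modality classifiers."""
          return np.mean([probs_text, probs_audio, probs_video], axis=0)

      joint = early_fusion(text_feat, audio_feat, video_feat)      # shape (684,)
      fused = late_fusion([0.2, 0.8], [0.4, 0.6], [0.3, 0.7])      # [0.3, 0.7]
      print(joint.shape, fused)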

  • Type:

    Article

  • Date:

    02 March 2021

  • Publication Status:

    Published

  • Publisher:

    Elsevier BV

  • DOI:

    10.1016/j.neucom.2021.02.020

  • Crossref:

    10.1016/j.neucom.2021.02.020

  • ISSN:

    0925-2312

  • Funders:

    Edinburgh Napier Funded

Citation

Dashtipour, K., Gogate, M., Cambria, E., & Hussain, A. (2021). A novel context-aware multimodal framework for Persian sentiment analysis. Neurocomputing, 457, 377-388. https://doi.org/10.1016/j.neucom.2021.02.020

Keywords

Multimodal sentiment analysis, Persian sentiment analysis
