Research Output
Fusing audio, visual and textual clues for sentiment analysis from multimodal content
  A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet a virtually unlimited source of information. In the coming decades, coping with this information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis that harvests sentiment from Web videos using a model that draws on audio, visual and textual modalities as sources of information. We use both feature-level and decision-level fusion methods to merge the affective information extracted from each modality. A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments on the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%.
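
  The abstract contrasts feature-level and decision-level fusion. As a rough illustration of the distinction (not the paper's actual pipeline), the following Python sketch fuses synthetic audio, visual and textual features both ways; the feature dimensions, synthetic data, and scikit-learn classifiers are all assumptions for demonstration.

```python
# Minimal sketch of the two fusion strategies named in the abstract.
# Feature sets, dimensions and classifiers are hypothetical stand-ins;
# the paper's actual extractors and models are described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # number of video utterances (synthetic)

# Per-modality feature matrices (dimensions are illustrative only).
modalities = {
    "audio": rng.normal(size=(n, 32)),    # e.g. prosodic/spectral features
    "visual": rng.normal(size=(n, 64)),   # e.g. facial-expression features
    "text": rng.normal(size=(n, 100)),    # e.g. lexical/concept features
}
labels = rng.integers(0, 2, size=n)       # binary sentiment labels

# Feature-level fusion: concatenate all modality features into one vector
# per utterance and train a single classifier on the fused representation.
fused = np.hstack(list(modalities.values()))
feature_level_clf = LogisticRegression(max_iter=1000).fit(fused, labels)

# Decision-level fusion: train one classifier per modality, then merge
# their output scores (here, a simple average of predicted probabilities).
clfs = {m: LogisticRegression(max_iter=1000).fit(X, labels)
        for m, X in modalities.items()}
avg_prob = np.mean(
    [clfs[m].predict_proba(X)[:, 1] for m, X in modalities.items()], axis=0
)
decision_level_pred = (avg_prob >= 0.5).astype(int)
```

  In general terms, feature-level fusion lets a single classifier exploit cross-modal correlations at the cost of a higher-dimensional input, while decision-level fusion keeps each modality's model independent and merges only their outputs.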

  • Type:

    Article

  • Date:

    17 August 2015

  • Publication Status:

    Published

  • DOI:

    10.1016/j.neucom.2015.01.095

  • ISSN:

    0925-2312

  • Library of Congress:

    QA75 Electronic computers. Computer science

  • Dewey Decimal Classification:

    004 Data processing & computer science

  • Funders:

    Historic Funder (pre-Worktribe)

Citation

Poria, S., Cambria, E., Howard, N., Huang, G., & Hussain, A. (2016). Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174(Part A), 50-59. https://doi.org/10.1016/j.neucom.2015.01.095

Authors

Soujanya Poria; Erik Cambria; Newton Howard; Guang-Bin Huang; Amir Hussain

Keywords

Multimodal fusion; Big social data analysis; Opinion mining; Multimodal sentiment analysis; Sentic computing
