Research Output
Convolutional MKL based multimodal emotion recognition and sentiment analysis
  Technology has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. Much of the content being posted and consumed online is multimodal. With billions of phones, tablets and PCs shipping today with built-in cameras, and a host of new video-equipped wearables like Google Glass on the horizon, the amount of video on the Internet will only continue to increase. It has become increasingly difficult for researchers to keep up with this deluge of multimodal content, let alone organize or make sense of it. Mining useful knowledge from video is a critical need that will grow exponentially, keeping pace with the global growth of content. This is particularly important in sentiment analysis, as both service and product reviews are gradually shifting from unimodal to multimodal. We present a novel method to extract features from the visual and textual modalities using deep convolutional neural networks. By feeding such features to a multiple kernel learning classifier, we significantly outperform the state of the art in multimodal emotion recognition and sentiment analysis on multiple datasets.
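
  The approach summarised above (per-modality feature extraction with deep CNNs, followed by a multiple kernel learning classifier) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes scikit-learn, uses random arrays as stand-ins for CNN-extracted features, and fixes the kernel weights where genuine MKL would learn them from data.

    # A minimal illustrative sketch, not the authors' code: fuse CNN-derived
    # visual and textual feature vectors through a fixed-weight sum of RBF
    # kernels, then classify with a kernel SVM. True multiple kernel learning
    # would learn the per-kernel weights; here they are assumed constants.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics.pairwise import rbf_kernel

    def combined_kernel(A_vis, B_vis, A_txt, B_txt, w_vis=0.5, w_txt=0.5):
        # Weighted sum of per-modality RBF kernels (weights are hypothetical).
        return w_vis * rbf_kernel(A_vis, B_vis) + w_txt * rbf_kernel(A_txt, B_txt)

    # Toy stand-ins for features a CNN would extract from each modality.
    rng = np.random.default_rng(0)
    vis_tr, txt_tr = rng.normal(size=(40, 128)), rng.normal(size=(40, 64))
    y_tr = rng.integers(0, 2, size=40)
    vis_te, txt_te = rng.normal(size=(10, 128)), rng.normal(size=(10, 64))

    # Train an SVM on the precomputed fused kernel over the training set.
    clf = SVC(kernel="precomputed")
    clf.fit(combined_kernel(vis_tr, vis_tr, txt_tr, txt_tr), y_tr)

    # At test time the fused kernel is computed between test and training samples.
    pred = clf.predict(combined_kernel(vis_te, vis_tr, txt_te, txt_tr))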

  • Date: 02 February 2017

  • Publication Status: Published

  • DOI: 10.1109/ICDM.2016.0055

  • Funders: Historic Funder (pre-Worktribe)

Citation

Poria, S., Chaturvedi, I., Cambria, E., & Hussain, A. (2017). Convolutional MKL based multimodal emotion recognition and sentiment analysis. https://doi.org/10.1109/ICDM.2016.0055

Authors

Poria, S., Chaturvedi, I., Cambria, E., & Hussain, A.

Keywords

Multimodal sentiment analysis, Deep learning, Convolutional neural networks, Multiple kernel learning
