Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

Abstract

When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher level representations based on these low-level features. We propose in this work to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense trajectory based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
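The pipeline summarized in the abstract is a multi-modal, score-level fusion scheme: modality-specific features (mid-level CNN representations learned from MFCC and HSV inputs, plus dense-trajectory motion descriptors) feed separate multi-class SVMs whose outputs are combined to select one of the four VA quadrants. The sketch below is a minimal illustration of such fusion using scikit-learn; it is not the authors' implementation, and all feature dimensions, fusion weights, and data are placeholders.

import numpy as np
from sklearn.svm import SVC

QUADRANTS = ["high-V/high-A", "low-V/high-A", "low-V/low-A", "high-V/low-A"]

def train_modality_svm(features, labels):
    # One multi-class SVM per modality; probability outputs enable score fusion.
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

def fuse_and_predict(classifiers, modality_features, weights=None):
    # Weighted sum of per-modality class probabilities (simple score-level fusion).
    weights = weights or [1.0] * len(classifiers)
    scores = sum(w * clf.predict_proba(x)
                 for w, clf, x in zip(weights, classifiers, modality_features))
    return scores.argmax(axis=1)

rng = np.random.default_rng(0)
labels = np.tile(np.arange(4), 10)    # toy labels: the four VA quadrants
audio = rng.normal(size=(40, 64))     # placeholder for audio (MFCC-based CNN) features
visual = rng.normal(size=(40, 64))    # placeholder for visual (HSV-based CNN) features
motion = rng.normal(size=(40, 128))   # placeholder for dense-trajectory descriptors

clfs = [train_modality_svm(x, labels) for x in (audio, visual, motion)]
pred = fuse_and_predict(clfs, [audio, visual, motion])
print([QUADRANTS[p] for p in pred[:5]])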

@INPROCEEDINGS{acarCbmi2015,
  author={Acar, Esra and Hopfgartner, Frank and Albayrak, Sahin},
  booktitle={International Workshop on Content-Based Multimedia Indexing (CBMI)},
  title={Fusion of learned multi-modal representations and dense trajectories for emotional analysis in videos},
  month={June},
  year={2015},
  pages={1-6},
  doi={10.1109/CBMI.2015.7153603}
}
Authors:
Esra Acar Celik, Frank Hopfgartner, Sahin Albayrak
Category:
Conference paper
Year:
2015
Venue:
Workshop on Content-Based Multimedia Indexing (CBMI)