Understanding Affective Content of Music Videos Through Learned Representations

Abstract

Given the ever-growing volume of available multimedia data, automatically annotating multimedia content with the feelings it is expected to elicit in users is a challenging problem. The emerging research field of video affective analysis addresses this problem by modeling the emotions that videos evoke in viewers. Since no dominant feature representation has yet emerged in this field, choosing discriminative features for the effective representation of video segments is a key issue in designing video affective content analysis algorithms. Most existing affective content analysis methods either use low-level audio-visual features or generate hand-crafted higher-level representations based on these low-level features. In this work, we propose to use deep learning methods, in particular convolutional neural networks (CNNs), to learn mid-level representations from automatically extracted low-level features. We exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCCs) and color values in the RGB space to build higher-level audio and visual representations. We use the learned representations for the affective classification of music video clips, applying multi-class support vector machines (SVMs) to classify video clips into four affective categories corresponding to the four quadrants of the Valence-Arousal (VA) space. Results on a subset of the DEAP dataset (76 music video clips) show that using these higher-level representations instead of low-level features yields a significant improvement in video affective content analysis.
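
As a rough illustration of the pipeline the abstract describes, the sketch below wires together its stages: MFCC patches and RGB frames feed two small CNNs whose outputs serve as learned mid-level features for a multi-class SVM over the four VA quadrants. This is a minimal sketch under stated assumptions, not the paper's implementation: the library choices (librosa, PyTorch, scikit-learn), the MidLevelCNN architecture, and all names and hyperparameters are illustrative, and the networks here are left untrained, whereas the paper learns the representations.

# Minimal sketch of the described pipeline, NOT the authors' implementation:
# low-level MFCC / RGB inputs -> small CNNs (mid-level features) ->
# multi-class SVM over the four Valence-Arousal quadrants.
import numpy as np
import librosa
import torch
import torch.nn as nn
from sklearn.svm import SVC

def mfcc_patch(audio_path, n_mfcc=13, frames=64):
    """Low-level audio features: an MFCC patch for one video segment."""
    y, sr = librosa.load(audio_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc[:, :frames].astype(np.float32)  # shape (n_mfcc, frames)

class MidLevelCNN(nn.Module):
    """Maps a 2-D low-level input (MFCC patch for audio, RGB frame for
    the visual modality) to a mid-level feature vector. Architecture and
    sizes are placeholders, not those of the paper."""
    def __init__(self, in_channels, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Linear(32 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

audio_net = MidLevelCNN(in_channels=1)  # MFCC patch as a 1-channel image
video_net = MidLevelCNN(in_channels=3)  # RGB frame, 3 channels

def mid_level_features(mfcc, rgb_frame):
    """Concatenate learned audio and visual representations for one clip.
    `rgb_frame` is assumed to be a float32 array of shape (3, H, W)."""
    with torch.no_grad():
        a = audio_net(torch.from_numpy(mfcc)[None, None])  # (1, 64)
        v = video_net(torch.from_numpy(rgb_frame)[None])   # (1, 64)
    return torch.cat([a, v], dim=1).numpy().ravel()        # (128,)

# Four affective classes = the four VA quadrants; a one-vs-one multi-class
# SVM as the final classifier (hyperparameters are placeholders).
svm = SVC(kernel="rbf", decision_function_shape="ovo")
# svm.fit(X_train, y_train)  # X: per-clip feature vectors, y in {0, 1, 2, 3}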

@inproceedings{acar2014understanding,
  title={Understanding Affective Content of Music Videos through Learned Representations},
  author={Acar, Esra and Hopfgartner, Frank and Albayrak, Sahin},
  booktitle={MultiMedia Modeling},
  pages={303--314},
  year={2014},
  issn={0302-9743},
  doi={10.1007/978-3-319-04114-8_26},
  publisher={Springer}
}
Authors: Esra Acar Celik, Frank Hopfgartner, Sahin Albayrak
Category: Conference Paper
Year: 2014
Location: MMM'14: Proceedings of the 20th International Conference on Multimedia Modeling