Violence Detection in Hollywood Movies by the Fusion of Visual and Mid-level Audio Cues

Abstract

Detecting violent scenes in movies is an important video content understanding task, e.g., for providing automated youth protection services. One key issue in designing algorithms for violence detection is the choice of discriminative features. In this paper, we employ mid-level audio features and compare their discriminative power against low-level audio and visual features. We fuse these mid-level audio cues with low-level visual ones at the decision level in order to further improve the performance of violence detection. We use Mel-Frequency Cepstral Coefficients (MFCCs) as audio features and average motion as visual features. In order to learn a violence model, we choose two-class support vector machines (SVMs). Our experimental results on detecting violent video shots in Hollywood movies show that mid-level audio features are more discriminative and provide more precise results than low-level ones. The detection performance is further enhanced by fusing the mid-level audio cues with low-level visual ones using SVM-based decision fusion.
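The decision-level fusion scheme the abstract describes can be sketched as follows. This is a minimal illustration with synthetic stand-in data; the feature dimensions, kernels, and hyperparameters are assumptions for the sketch, not the paper's settings:

```python
# Sketch of SVM-based decision-level fusion of audio and visual cues.
# Synthetic data stands in for per-shot features; shapes are assumed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_shots = 200

# Stand-ins for the two modalities (assumptions, not the paper's features):
# an MFCC-based audio representation and a scalar average-motion feature.
X_audio = rng.normal(size=(n_shots, 13))
X_visual = rng.normal(size=(n_shots, 1))
# Synthetic violent (1) vs. non-violent (0) shot labels.
y = (X_audio[:, 0] + X_visual[:, 0] > 0).astype(int)

# Train one two-class SVM per modality.
svm_audio = SVC(kernel="rbf").fit(X_audio, y)
svm_visual = SVC(kernel="rbf").fit(X_visual, y)

# Decision-level fusion: stack the two decision scores per shot and
# train a second SVM on them instead of on the raw features.
scores = np.column_stack([
    svm_audio.decision_function(X_audio),
    svm_visual.decision_function(X_visual),
])
fusion_svm = SVC(kernel="linear").fit(scores, y)
pred = fusion_svm.predict(scores)
```

The key point is that the fusion classifier sees only the per-modality decision scores (feature-level concatenation would instead join the raw MFCC and motion vectors before a single SVM).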

@inproceedings{acar2013violence,
  title={Violence Detection in {Hollywood} Movies by the Fusion of Visual and Mid-level Audio Cues},
  author={Acar, Esra and Hopfgartner, Frank and Albayrak, Sahin},
  booktitle={Proceedings of the 21st ACM International Conference on Multimedia},
  pages={717--720},
  year={2013},
  doi={10.1145/2502081.2502187},
  isbn={978-1-4503-2404-5},
  publisher={ACM}
}
Authors:
Esra Acar Celik, Frank Hopfgartner, Sahin Albayrak
Category:
Conference Paper
Year:
2013
Venue:
ACM Multimedia (ACMMM)