Comparative Evaluation of Recommendation Systems for Digital Media

Abstract

TV operators and content providers use recommender systems to connect consumers directly with content that fits their needs, their different devices, and the context in which the content is being consumed. Choosing the right recommender algorithms is critical, and becomes more difficult as content offerings continue to expand radically. Because different algorithms respond differently depending on the use case, including the content and the consumer base, theoretical estimates of performance are not sufficient. Rather, evaluation must be carried out in a realistic environment. The Reference Framework described here is an evaluation platform that enables TV operators to impartially compare not only the qualitative aspects of recommendation algorithms, but also the non-functional requirements of complete recommendation solutions. The Reference Framework is being created within the CrowdRec project, which brings together innovative recommender system vendors and university researchers specializing in recommender systems and their evaluation. It currently provides batch-based evaluation modes and will also support stream-based modes in the future. It is able to encapsulate open source recommenders and evaluation frameworks, making it suitable for a wide range of evaluation needs.
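To make the notion of batch-based (offline) evaluation mentioned above concrete, the following is a minimal sketch of such a workflow: a fixed interaction log is split once into training and test sets, a recommender is trained on the first part, and a ranking metric is computed on the held-out part. The data, the popularity recommender, and the precision@k metric are illustrative assumptions only; this is not the CrowdRec Reference Framework API.

from collections import Counter, defaultdict

# Hypothetical interaction log: (user, item) pairs, ordered by time.
interactions = [
    ("u1", "i1"), ("u1", "i2"), ("u2", "i1"), ("u2", "i3"),
    ("u3", "i2"), ("u3", "i3"), ("u1", "i4"), ("u2", "i4"),
]

# Batch mode: split the log once into a training set and a held-out test set.
split = int(len(interactions) * 0.75)
train, test = interactions[:split], interactions[split:]

# Toy recommender: rank items by global popularity in the training data.
popularity = Counter(item for _, item in train)
seen = defaultdict(set)
for user, item in train:
    seen[user].add(item)

def recommend(user, k=3):
    """Return the k most popular items the user has not yet interacted with."""
    ranked = [item for item, _ in popularity.most_common() if item not in seen[user]]
    return ranked[:k]

# Evaluate: precision@k over the held-out interactions, grouped by user.
held_out = defaultdict(set)
for user, item in test:
    held_out[user].add(item)

precisions = []
for user, relevant in held_out.items():
    recs = recommend(user, k=3)
    precisions.append(len(set(recs) & relevant) / 3)

print(f"precision@3 = {sum(precisions) / len(precisions):.3f}")

A stream-based mode, by contrast, would feed interactions to the recommender one at a time and evaluate each recommendation before the next event arrives, which this sketch does not cover.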

@inproceedings{TikkEtAl:ComparativeEvaluationOfRecommendationSystemsForDigitalMedia,
author = {Domonkos Tikk and Roberto Turrin and Martha Larson and David Zibriczky and Davide Malagoli and Alan Said and Andreas Lommatzsch and Sandor Szekely},
title = {Comparative Evaluation of Recommendation Systems for Digital Media},
booktitle = {Proceedings of the IBC conference 2014},
year = {2014},
location = {Amsterdam, Netherlands},
numpages = {8}
}
Authors:
Domonkos Tikk, Roberto Turrin, Martha Larson, David Zibriczky, Davide Malagoli, Alan Said, Andreas Lommatzsch, Sandor Szekely
Category:
Conference Paper
Year:
2014