Workshop on Adaptive Retrieval and Recommender Systems

In recent years, immense progress has been made in the development of recommendation, retrieval, and personalization techniques. The evaluation of these systems, however, is still largely based on traditional metrics such as precision, recall, and RMSE, which often disregard the use case and context in which a system operates. The rapid evolution of novel IR and recommender systems fosters the need for new evaluation paradigms.
This workshop serves as a venue for work on novel, personalization-centric benchmarking approaches to evaluate adaptive retrieval and recommender systems.
New approaches to evaluating such systems should assess both functional and non-functional requirements. Functional requirements go beyond traditional relevance metrics and focus on user-centered utility metrics such as novelty, diversity, and serendipity. Non-functional requirements concern performance and technical aspects, e.g., scalability and reactivity.
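To illustrate what such a user-centered utility metric can look like in practice, the following is a minimal sketch of intra-list diversity, measured as the average pairwise dissimilarity of the items in a recommendation list. The item feature vectors and the cosine-based dissimilarity are illustrative assumptions, not a metric prescribed by the workshop.

```python
from itertools import combinations
import math

def cosine_similarity(a, b):
    """Cosine similarity of two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def intra_list_diversity(item_vectors):
    """Average pairwise (1 - cosine similarity) over a recommendation list."""
    pairs = list(combinations(item_vectors, 2))
    return sum(1 - cosine_similarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical feature vectors for three recommended items: the first two
# are near-duplicates, the third is dissimilar, so diversity is moderate.
recommendations = [
    [1.0, 0.0, 0.2],
    [0.9, 0.1, 0.3],
    [0.0, 1.0, 0.8],
]
print(f"Intra-list diversity: {intra_list_diversity(recommendations):.3f}")
```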
The aim of this workshop is to foster research on the evaluation of adaptive retrieval and recommender systems, thereby bringing together benchmarking efforts from the information retrieval and recommender systems communities.