Personalized Learning to Rank Applied to Search Engines
Abstract
Learning to rank is a successful approach to bringing machine learning and information retrieval together. With learning to rank it is possible to learn ranking functions based on user preferences. User preferences depend, for instance, on the user's background, features of the results, relations to other entities, and the occurrence of the searched entities in the presented abstracts. The reason why only few applications utilize learning to rank for personalization lies in the extended query response time and the additional resource needs in general. These resource needs arise from the use of machine learning and from the need to train and apply user models. Experiments on standard benchmark data show that learning-to-rank approaches perform well, but they do not show how much feedback is needed for an improvement or whether personalization is possible at all. Hence, the minimal amount of training data needed to create a ranking function is not known. We show that keeping the training data as small as possible minimizes the resource needs and even makes it feasible to train personalized ranking functions. In this work we apply learning to rank to an existing search engine and evaluate the conditions and effects of learning personal preferences. We evaluate how much implicit feedback is needed and use this knowledge to reduce the computational requirements, enabling learning-to-rank-based personalization.