Tuesday, November 24, 2015

Context-Aware Bandits. (arXiv:1510.03164v2 [cs.LG] UPDATED)

In this paper, we present a simple and efficient Context-Aware Bandit (CAB) algorithm. With CAB we aim to craft a bandit algorithm that captures collaborative effects and can be easily deployed in a real-world recommendation system, a setting in which multi-armed bandits have been shown to perform well, particularly with respect to the cold-start problem. CAB combines a context-aware clustering technique with standard exploration-exploitation strategies: it dynamically clusters users based on the content universe under consideration. We provide a theoretical analysis in the standard stochastic multi-armed bandit setting. We demonstrate the efficiency of our approach on production and real-world datasets, showing its scalability and, more importantly, its significantly improved prediction performance compared with several existing state-of-the-art methods.
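The abstract only describes the idea at a high level, so the snippet below is a rough illustrative sketch (in Python with NumPy) of what a context-aware clustering bandit of this flavor could look like, not the authors' reference implementation. The class name, the `gamma` neighborhood threshold, the ridge-regression per-user estimates, and the UCB-style aggregation are all assumptions made for the sake of a runnable example.

```python
import numpy as np


class ContextAwareClusteringBandit:
    """Illustrative sketch: per-user linear payoff estimates plus an
    item-specific neighborhood of 'agreeing' users whose statistics are
    aggregated before a UCB-style item choice (hypothetical design)."""

    def __init__(self, n_users, dim, alpha=1.0, gamma=0.2):
        self.alpha = alpha  # exploration strength (assumed parameter)
        self.gamma = gamma  # neighborhood agreement threshold (assumed parameter)
        self.dim = dim
        # One ridge-regression Gram matrix and reward vector per user.
        self.A = np.stack([np.eye(dim) for _ in range(n_users)])
        self.b = np.zeros((n_users, dim))

    def _theta(self, u):
        # Current payoff estimate for user u.
        return np.linalg.solve(self.A[u], self.b[u])

    def recommend(self, user, items):
        """items: array of shape (n_items, dim); returns the chosen item index."""
        thetas = np.stack([self._theta(u) for u in range(len(self.A))])
        best, best_score = 0, -np.inf
        for k, x in enumerate(items):
            scores = thetas @ x
            # Item-specific cluster: users whose predicted payoff for this
            # item is close to the current user's prediction.
            neighbors = np.where(np.abs(scores - scores[user]) <= self.gamma)[0]
            # Aggregate neighborhood statistics (keep a single identity prior).
            A_nb = self.A[neighbors].sum(axis=0) - (len(neighbors) - 1) * np.eye(self.dim)
            b_nb = self.b[neighbors].sum(axis=0)
            theta_nb = np.linalg.solve(A_nb, b_nb)
            width = self.alpha * np.sqrt(x @ np.linalg.solve(A_nb, x))
            score = theta_nb @ x + width  # UCB: estimate + confidence width
            if score > best_score:
                best, best_score = k, score
        return best

    def update(self, user, item_vec, reward):
        # Standard rank-one update of the served user's statistics.
        self.A[user] += np.outer(item_vec, item_vec)
        self.b[user] += reward * item_vec


# Minimal usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bandit = ContextAwareClusteringBandit(n_users=20, dim=5)
    for t in range(100):
        user = rng.integers(20)
        items = rng.normal(size=(10, 5))
        k = bandit.recommend(user, items)
        reward = float(rng.random() < 0.5)  # placeholder feedback
        bandit.update(user, items[k], reward)
```

The neighborhood is recomputed per item, which is what distinguishes this style of context-dependent clustering from clustering users once and for all; the aggregation and threshold rule shown here are only one plausible instantiation.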



from cs.AI updates on arXiv.org http://ift.tt/1ZwXM9W
via IFTTT