Most practical recommender systems focus on estimating immediate user engagement without considering the long-term effects of recommendations on user behaviour. Reinforcement learning (RL) methods offer the potential to optimize recommendations for long-term user engagement. However, since users are often presented with slates of multiple items—which may have interacting effects on user choice—methods are required to deal with the combinatorics of the RL action space.
Google’s SlateQ algorithm addresses this challenge by decomposing the long-term value (LTV) of a slate into a tractable function of its component item-wise LTVs. In this repo, we compare the efficiency of SlateQ with that of other RL methods, such as standard Q-learning, that do not decompose a slate's LTV into item-wise LTVs.
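As a rough illustration of this decomposition (the function names and the simple conditional-logit choice model below are our own assumptions, not the exact formulation used in the paper or the notebooks), the slate-level Q-value can be computed as the expected item-wise LTV under a user choice model:

```python
import numpy as np

def slate_q_value(item_scores, item_ltvs):
    """Illustrative SlateQ-style decomposition (simplified; no "no-click" option).

    Weights each item's long-term value (LTV) by the probability that the user
    chooses that item from the slate, using a conditional-logit choice model.
    """
    # User choice probabilities over the slate: softmax of per-item scores.
    exp_scores = np.exp(item_scores - np.max(item_scores))
    choice_probs = exp_scores / exp_scores.sum()
    # Slate-level Q-value = expected item-wise LTV under the choice distribution.
    return float(np.dot(choice_probs, item_ltvs))

# Example: a 3-item slate with per-item relevance scores and item-wise LTVs.
print(slate_q_value(np.array([1.2, 0.3, -0.5]), np.array([4.0, 2.5, 1.0])))
```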
Empirically, we've shown that the SlateQ algorithm outperforms traditional Q-learning approaches across multiple metrics in our simulated environment.
Here, we use the interest evolution environment from the RecSim (GitHub repo) library to train RL agents (see the setup sketch below the resource list).
- Problem Formulation Document
- Exploratory Notebook on the interest evolution environment
- Notebook comparing RL techniques
- Presentation
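To make the environment setup concrete, here is a minimal sketch of loading the interest evolution environment, assuming the standard RecSim Gym API; the configuration values are placeholders, and the settings actually used in this project are in the exploratory notebook above.

```python
from recsim.environments import interest_evolution

# Illustrative configuration (placeholder values, not our experiment settings):
# the recommender picks a slate of `slate_size` items from `num_candidates`
# candidate documents at each step.
env_config = {
    'num_candidates': 10,
    'slate_size': 2,
    'resample_documents': True,
    'seed': 0,
}

env = interest_evolution.create_environment(env_config)
observation = env.reset()

# A slate is an ordered list of indices into the candidate documents.
observation, reward, done, info = env.step([0, 1])
print(reward, done)
```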
Collin Prather and Shishir Kumar are Master's students in Data Science at the University of San Francisco.
Thanks to Prof. Brian Spiering for introducing us to this wonderful world of RL.
As required by the recsim library, this repo uses Python 3.6.