Variational autoencoders have proven successful in domains such as computer vision and speech processing. Their adoption for modeling user preferences is still largely unexplored, although it has recently started to gain attention in the literature. In this work, we propose a model that extends variational autoencoders by exploiting the rich information present in the past preference history. We introduce a recurrent version of the VAE: instead of passing a subset of the whole history regardless of temporal dependencies, we pass the consumption-sequence subset through a recurrent neural network. At each time-step of the RNN, the sequence is fed through a series of fully-connected layers, whose output models the probability distribution of the most likely future preferences. We show that handling temporal information is crucial for improving the accuracy of the VAE: our model beats the current state-of-the-art by significant margins thanks to its ability to capture temporal dependencies in the user consumption sequence with the recurrent encoder, while keeping the fundamentals of variational autoencoders intact.
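As a rough illustration of the pipeline described above, the following is a minimal NumPy sketch of a recurrent VAE encoder: an RNN consumes the item sequence, fully-connected heads map its last hidden state to the mean and log-variance of the latent distribution, and a decoder produces a probability distribution over the item catalog. All layer sizes, parameter names, and the plain Elman-style RNN cell are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical dimensions; the paper does not specify these.
n_items, embed_dim, hidden_dim, latent_dim = 100, 16, 32, 8
rng = np.random.default_rng(0)

# Parameters of a plain (Elman-style) RNN encoder.
item_emb = rng.normal(0, 0.1, (n_items, embed_dim))
W_xh = rng.normal(0, 0.1, (embed_dim, hidden_dim))
W_hh = rng.normal(0, 0.1, (hidden_dim, hidden_dim))

# Fully-connected heads mapping the last hidden state to the
# parameters of the approximate posterior q(z | sequence).
W_mu = rng.normal(0, 0.1, (hidden_dim, latent_dim))
W_logvar = rng.normal(0, 0.1, (hidden_dim, latent_dim))

# Decoder: latent code -> unnormalized scores over the item catalog.
W_dec = rng.normal(0, 0.1, (latent_dim, n_items))

def encode(item_ids):
    """Run the consumption sequence through the RNN and return the
    mean and log-variance of the latent distribution."""
    h = np.zeros(hidden_dim)
    for i in item_ids:  # one RNN step per consumed item
        h = np.tanh(item_emb[i] @ W_xh + h @ W_hh)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Softmax over the catalog: predicted preference distribution."""
    scores = z @ W_dec
    scores -= scores.max()  # numerical stability
    p = np.exp(scores)
    return p / p.sum()

# Forward pass on a toy consumption sequence of item ids.
sequence = [3, 17, 42, 8]
mu, logvar = encode(sequence)
probs = decode(reparameterize(mu, logvar))
```

Training would additionally optimize the usual VAE objective (reconstruction likelihood plus a KL term on `q(z | sequence)`), which is omitted here for brevity.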