
Olivier Jeunen

Postdoctoral scientist at Amazon specializing in Bandits, Reinforcement Learning, and Causal Inference for Recommender Systems.

Best podcasts with Olivier Jeunen

Ranked by the Snipd community
11 snips
Jan 3, 2022 • 1h 13min

#3: Bandits and Simulators for Recommenders with Olivier Jeunen

In episode three I am joined by Olivier Jeunen, who is a postdoctoral scientist at Amazon. Olivier obtained his PhD from the University of Antwerp with his thesis "Offline Approaches to Recommendation with Online Success". His work concentrates on bandits, reinforcement learning, and causal inference for recommender systems.

We talk about methods for evaluating the online performance of recommender systems in an offline fashion, based on rich logging data. These methods stem from fields like bandit theory and reinforcement learning, and they rely heavily on simulators, whose benefits, requirements, and limitations we discuss in greater detail. We further discuss the differences between organic and bandit feedback, as well as what sets recommenders apart from advertising. We also talk about the right target for optimization, and Olivier shares some advice on continuing lifelong learning as a researcher, be it in academia or industry.

Olivier has published multiple papers at RecSys, NeurIPS, WSDM, UMAP, and WWW. He also won the RecoGym challenge with his team from the University of Antwerp. With research internships at Criteo, Facebook, and Spotify Research, he brings significant experience to the table. Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.

Links from this Episode:
Olivier's Website
Olivier Jeunen on LinkedIn and Twitter

Simulators:
RecoGym
RecSim
RecSimNG
Open Bandit Pipeline

Blogpost: Lessons Learned from Winning the RecoGym Challenge
RecSys 2020 REVEAL Workshop on Bandit and Reinforcement Learning from User Interactions
RecSys 2021 Tutorial on Counterfactual Learning and Evaluation for Recommender Systems
NeurIPS 2021 Workshop on Causal Inference and Machine Learning

Thesis and Papers:
Dissertation: Offline Approaches to Recommendation with Online Success
Chen et al. (2018): Top-K Off-Policy Correction for a REINFORCE Recommender System
Jeunen et al. (2021): Disentangling Causal Effects from Sets of Interventions in the Presence of Unobserved Confounders
Jeunen et al. (2021): Top-K Contextual Bandits with Equity of Exposure

General Links:
Follow me on Twitter: https://twitter.com/LivesInAnalogia
Send me your comments, questions and suggestions to marcel@recsperts.com
Podcast Website: https://www.recsperts.com/