
#030 Multi-Armed Bandits and Pure-Exploration (Wouter M. Koolen)


00:00 Intro

This chapter introduces the multi-armed bandit problem and its role in sequential decision-making, emphasizing the exploration/exploitation trade-off: whether to keep pulling the arm that currently looks best or to sample less-tried arms whose payoffs are still uncertain. It illustrates these ideas with a gambling analogy and surveys practical applications, including clinical trials.
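The trade-off described above can be made concrete with a minimal sketch. The following epsilon-greedy strategy is a standard textbook baseline, not the algorithm discussed in the episode (Koolen's work centers on pure-exploration methods); the function name, Bernoulli reward model, and parameter values are illustrative assumptions:

```python
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=1000, seed=0):
    """Illustrative epsilon-greedy bandit (not the episode's method):
    with probability epsilon explore a random arm, otherwise exploit
    the arm with the highest estimated mean. Arms pay Bernoulli rewards."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # pulls per arm
    estimates = [0.0] * k     # running mean reward per arm
    total_reward = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                            # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean for the pulled arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, counts, total_reward
```

Run on two arms with true means 0.2 and 0.8, the strategy concentrates most pulls on the better arm while still occasionally sampling the worse one, which is exactly the immediate-versus-delayed-reward tension the chapter describes.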
