The problem occurs when they have a self-dealing, extractive business model that says, instead of just wanting to help you with your goal, we really just want to suck you down the rabbit hole. And there's no reason why recommendations should be on by default. This is about: why are we recommending things to people that systematically tilt in the more extremizing directions that we know are ruining society? So how do we actually regulate? I mean, why not just not have recommendations at all, except when you click a button specifically?
When we press play on a YouTube video, we set in motion an algorithm that taps all available data to find the next video that keeps us glued to the screen. Because of its advertising-based business model, YouTube’s top priority is not to help us learn to play the accordion, tie a bow tie, heal an injury, or see a new city — it’s to keep us staring at the screen for as long as possible, regardless of the content. This episode’s guest, AI expert Guillaume Chaslot, helped write YouTube’s recommendation engine, and he explains how those priorities spin up outrage, conspiracy theories, and extremism. After leaving YouTube, Guillaume made it his mission to shed light on those hidden patterns through his website, AlgoTransparency.org, which tracks and publicizes YouTube’s recommendations across controversial content channels. Through his work, he encourages YouTube to take responsibility for the videos it promotes and aims to give viewers more control.
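To make the incentive concrete, here is a minimal sketch of what a watch-time-maximizing ranker looks like. This is not YouTube's actual code (which is proprietary); all names and numbers are hypothetical. The point it illustrates is the one Guillaume makes: when the only signal in the objective is predicted watch time, whether a video serves the viewer's goal never enters the ranking.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    predicted_watch_minutes: float  # model's estimate of how long the user will watch
    serves_user_goal: bool          # e.g. the accordion lesson the user searched for

def rank_for_engagement(candidates: list[Candidate]) -> list[Candidate]:
    # The only signal is predicted watch time; relevance to the
    # user's stated goal is never consulted by the objective.
    return sorted(candidates, key=lambda c: c.predicted_watch_minutes, reverse=True)

# Hypothetical candidates: a relevant tutorial vs. an outrage video
# that the model predicts will hold attention longer.
candidates = [
    Candidate("accordion-lesson-3", predicted_watch_minutes=8.0, serves_user_goal=True),
    Candidate("outrage-compilation", predicted_watch_minutes=22.0, serves_user_goal=False),
]

# The outrage video ranks first, because it keeps the user staring longer.
print([c.video_id for c in rank_for_engagement(candidates)])
```

Under this objective, content that systematically tilts toward the extreme wins the ranking whenever it is better at holding attention, which is exactly the dynamic the episode explores.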