Conspiracy theories are very effective at grabbing our attention and keeping us around. They become kind of like black holes: if the system is just recommending the stuff that people click on, one of the techniques it's going to find is recommending conspiracy videos. So it's not only "don't trust the media"; it's the same with any moral. The algorithm by design will be anti-moral. If you have a moral in society, say that racism is bad and humans are equal, some people think, no, racism is good. But what I have a problem with is that the algorithm is structurally, systematically anti-moral.
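To make that incentive concrete, here is a minimal toy sketch (all names and numbers are hypothetical; this is not YouTube's actual system) of a recommender whose only objective is predicted watch time. Note that nothing in the objective can represent a moral value, only engagement:

```python
# Toy illustration (hypothetical): a recommender that greedily picks whichever
# video maximizes predicted watch time drifts toward the most engaging content,
# regardless of what that content is.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # estimated from past clicks and watch time

def recommend_next(candidates: list[Video]) -> Video:
    # The objective is engagement alone; nothing here encodes "is this true?"
    # or "is this good for the viewer?", only "will they keep watching?"
    return max(candidates, key=lambda v: v.predicted_watch_minutes)

catalog = [
    Video("How to tie a bow tie", 4.0),
    Video("Accordion lesson 1", 7.5),
    Video("SHOCKING conspiracy they don't want you to see", 22.0),
]

# The conspiracy video wins purely because it keeps people glued the longest.
print(recommend_next(catalog).title)
```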
When we press play on a YouTube video, we set in motion an algorithm that taps all available data to find the next video that keeps us glued to the screen. Because of its advertising-based business model, YouTube's top priority is not to help us learn to play the accordion, tie a bow tie, heal an injury, or see a new city; it's to keep us staring at the screen for as long as possible, regardless of the content. This episode's guest, AI expert Guillaume Chaslot, helped write YouTube's recommendation engine and explains how those priorities spin up outrage, conspiracy theories, and extremism. After leaving YouTube, Guillaume made it his mission to shed light on those hidden patterns through his website, AlgoTransparency.org, which tracks and publicizes YouTube's recommendations for controversial content channels. Through his work, he encourages YouTube to take responsibility for the videos it promotes and aims to give viewers more control.