Once a week, we send out a newsletter called One Helpful Idea, in which we distill a single idea we think you'll find valuable. Each idea is formatted to be read in just 30 seconds, and at the bottom of the newsletter we also include links to that week's new podcast episodes. To sign up for the One Helpful Idea newsletter and start receiving bite-sized ideas once a week, visit clearerthinkingpodcast.com/newsletter.
Read the full transcript here.
How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims with caveats and expressions of uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why might it be worth focusing on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely to become existential threats in the future? What are the hardest parts of AI research? What skills will we probably really wish we had on the eve of deploying superintelligent AIs?
Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization, where he currently leads its interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.