When you're trying to model the way people behave, you've got these bottom-up and top-down approaches, different ways of doing it. I like to try to approach it from both ends. But I do see your point that if you start top-down, assuming rationality, sometimes that will just fully explain the thing. And then you're like, oh, okay, that's why this thing exists, because even rational agents would behave this way. It may seem like a strange behavior, but it actually makes perfect sense when you think about it.
Read the full transcript here.
How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims in lots of caveats and uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why should you maybe focus on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely in the future to become existential threats? What are the hardest parts of AI research? What skills will we probably really wish we had on the eve of deploying superintelligent AIs?
Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization. He currently leads their interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.