I like that approach because I think there's often a form of writing where people don't state anything concretely or strongly enough for you to really disagree with it. So when I'm trying to figure out what I believe about something, a lot of the time my approach is to make up some extremely oversimplified frame. Whenever anyone says something relevant to the question, I first think of it in that frame, the hope being: I have this shiny new frame, and the frame is probably not 100% correct, but it seems really healthy to start out by trying to interpret everything through it and see how often it needs to make concessions.
Read the full transcript here.
How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims with lots of caveats and uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why might you focus on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely to become existential threats in the future? What are the hardest parts of AI research? What skills will we probably really wish we had on the eve of deploying superintelligent AIs?
Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization. He currently leads their interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.