I want to discuss AI with you, and especially issues around aligning artificial intelligence to make it operate on behalf of humanity instead of against humanity's interests. I know you've recently founded an organization, is that right? Yeah, I wasn't the person who provided the initial impetus for it, but I'm currently helping to run it. At Redwood Research, we're trying to answer applied machine learning questions where we're actually training models. And our basic approach here is that we want to take problems that we're worried about eventually causing grave risks for humanity, like technical difficulties that could cause existential risk later on.
Read the full transcript here.
How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims with lots of caveats and uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why might you focus on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely to become existential threats in the future? What are the hardest parts of AI research? What skills will we probably wish we had on the eve of deploying superintelligent AIs?
Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization, where he currently leads interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.