Connor was the first guest of this podcast. In the last episode, we talked a lot about EleutherAI, the grassroots collective of researchers he co-founded, which open-sourced GPT-3-like models such as GPT-NeoX and GPT-J. Since then, Connor co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research.
One of the goals of Conjecture is to reach a fundamental understanding of the internal mechanisms of current deep learning models using interpretability techniques. In this episode, we go through the famous AI Alignment compass memes, discuss Connor's inside views about AI progress, how he approaches AGI forecasting, his takes on Eliezer Yudkowsky's "Death with Dignity" strategy, common misconceptions about EleutherAI, and why you should consider working for his new company Conjecture.
youtube: https://youtu.be/Oz4G9zrlAGs
transcript: https://theinsideview.ai/connor2
twitter: https://twitter.com/MichaelTrazzi
OUTLINE
(00:00) Highlights
(01:08) AGI Meme Review
(13:36) Current AI Progress
(25:43) Defining AGI
(34:36) AGI Timelines
(55:34) Death with Dignity
(01:23:00) EleutherAI
(01:46:09) Conjecture
(02:43:58) Twitter Q&A