The Probability of Catastrophe in AI
We are selected for inclusive genetic fitness, the ability to reproduce. That selection effectively pulls in a random instantiation of capabilities and motivations that just happened to give you this particular behavior. So we have no reassurance, and really no reason to believe at all, that the goals we have for AI will be represented internally.
Nathan Labenz dives in with Jaan Tallinn, a technologist, entrepreneur (Kazaa, Skype), and investor (DeepMind and more) whose unique life journey has intersected with some of the most important social and technological events of our collective lifetime. Jaan has since invested in nearly 180 startups, including dozens of AI application layer companies and some half dozen startup labs focused on fundamental AI research, all in an effort to support the teams he believes are most likely to lead us to AI safety, and to have a seat at the table at organizations he worries might take on too much risk. He has also founded several philanthropic nonprofits, including the Future of Life Institute, which recently published the open letter calling for a six-month pause on training new AI systems. In this discussion, we focused on:
- The current state of AI development and safety
- Jaan's expectations for possible economic transformation
- What catastrophic failure modes worry him most in the near term
- How big of a bullet we dodged with the training of GPT-4
- Which organizations really matter for immediate-term pause purposes
- How AI race dynamics are likely to evolve over the next couple of years
LINKS REFERENCED IN THE EPISODE:
Future of Life's open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Eliezer Yudkowsky's TIME article: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Podcast: Daniela and Dario Amodei on Anthropic: https://podcasts.apple.com/ie/podcast/daniela-and-dario-amodei-on-anthropic/id1170991978?i=1000552976406
Zvi on the pause: https://thezvi.substack.com/p/on-the-fli-ai-risk-open-letter
--
We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com.
SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US and is the global force behind Allbirds, Rothy's, Brooklinen, and millions of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform to their in-person POS system, wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts, from blog posts to product descriptions, using AI. Sign up for a $1/month trial period: https://shopify.com/cognitive
Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.
TIMESTAMPS:
(0:00) Episode Preview
(1:30) Jaan's impressive entrepreneurial career and his role in the recent AI Open Letter
(3:26) AI safety and Future of Life Institute
(6:55) Jaan's first meeting with Eliezer Yudkowsky and the founding of the Future of Life Institute
(13:00) Future of AI evolution
(15:55) Sponsor: Omneky
(17:20) Jaan's investments in AI companies
(24:22) The emerging danger paradigm
(33:48) AI supervising itself
(40:06) Evolution, useful heuristics, and lack of insight into selection process
(43:13) Current estimate for life-ending catastrophe
(54:20) Our luck given the softness of language models
(56:24) Future of Language Models
(1:01:00) The Moore’s law of mad science
(1:03:02) GPT-5 type project
(1:11:00) AI alignment with the latest models
(1:14:31) AI research investment and safety
(1:21:00) What a six month pause buys us
(1:27:01) AI passing the Turing Test
(1:33:18) Responsible AI development
(1:41:20) Neuralink implant technology