The Alignment Problem and the Future of Human Civilization
Deger Turan is currently leading the AI Objectives Institute, a non-profit research lab focusing on the question of alignment. Deger: "The doom scenarios that I am most concerned about do not look like nanobots crawling all over us."
In this episode, we delve into the frontier of AI and the challenges surrounding AI alignment. The AI / Crypto overlap at Zuzalu sparked discussions on topics like ZKML, MEV bots, and the integration of AI agents into the Ethereum landscape.
However, the focal point was the alignment conversation, which showcased both pessimistic and resignedly optimistic perspectives. We hear from Nate Soares of MIRI, who offers a deeply pessimistic view of AI risk, and Deger Turan, who emphasizes the importance of human alignment as a prerequisite for aligning AI. Their discussions touch on epistemology, individual preferences, and the potential of AI to assist in personal and societal growth.
------ Join Ryan & David at Permissionless in September. Bankless Citizens get 30% off. https://bankless.cc/GoToPermissionless
------ BANKLESS SPONSOR TOOLS:
KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
METAMASK PORTFOLIO | TRACK & MANAGE YOUR WEB3 EVERYTHING https://bankless.cc/MetaMask
ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle
POLYGON | VALUE LAYER OF THE INTERNET https://polygon.technology/roadmap
------
Timestamps
0:00 Intro
1:50 Guests
5:30 NATE SOARES
7:25 MIRI
13:30 Human Coordination
17:00 Dangers of Superintelligence
21:00 AI's Big Moment
24:45 Chances of Doom
35:35 A Serious Threat
42:45 Talent is Scarce
48:20 Solving the Alignment Problem
59:35 Dealing with Pessimism
1:03:45 The Sliver of Utopia
1:14:00 DEGER TURAN
1:17:00 Solving Human Alignment
1:22:40 Using AI to Solve Problems
1:26:30 AI Objectives Institute
1:31:30 Epistemic Security
1:36:18 Curating AI Content
1:41:00 Scalable Coordination
1:47:15 Building Evolving Systems
1:54:00 Independent Flexible Systems
1:58:30 The Problem is the Solution
2:03:30 A Better Future
----- Resources
Nate Soares https://twitter.com/So8res?s=20
Deger Turan https://twitter.com/degerturann?s=20
MIRI https://intelligence.org/
Less Wrong AI Alignment https://www.lesswrong.com/tag/ai-alignment-intro-materials
AI Objectives Institute https://aiobjectives.org/
------
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure. From time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures