AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
In the podcast, the assumption is made that AIs can improve themselves in a big, lumpy way, allowing for broad and significant advances in their capabilities. However, this assumption is not a given and requires multiple factors to align.
Another key assumption made in the podcast is that the owners or creators of an AI would be unable to take control of it or stop it from taking harmful or destructive actions. This assumption is hard to accept, as it disregards the owners' potential ability to intervene and prevent such actions.
The assumption that an AI would transition into an agent with drastically different goals is another point of contention. While it's possible for AIs to become more advanced and develop different objectives, it is doubtful that they would turn against humanity with the intention of destroying it.
The assumption that AIs would have no internal conflicts and could coordinate effectively is also questionable. In reality, coordination and conflict resolution are significant challenges among humans and would likely affect AI systems as well.
According to Robin Hanson, the universe is filling up with alien civilizations that are expanding rapidly. We are early in the grand scheme of things, and in the future we might be enveloped by these "grabby" alien civilizations. Even so, such civilizations would likely be interested in preserving us as a data point about other aliens. The rate at which these civilizations spread suggests we might meet them in roughly a billion years.
The Future of Life Institute released an open letter calling for a pause on large-scale AI experiments to allow time to address AI alignment issues. Robin Hanson argues that a six-month pause would not yield significant new insights, and that longer-term pauses would stifle technological progress. He highlights the challenge of global enforcement and suggests that corporate use of AI would be hard to ban. The debate raises questions about the balance between innovation and regulating the risks of advancing AI technology.
Robin Hanson's research on grabby aliens likens civilizations that expand throughout the universe to cancer: a few civilizations go through multiple leaps in development and come to dominate the universe. Quiet civilizations, meanwhile, are rare and may be studied by grabby aliens for insights into other alien civilizations. The model predicts that meetings between grabby and quiet aliens would be rare, yet it estimates the universe is already roughly half-filled with grabby aliens.
In this highly anticipated sequel to our first AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford University.
Eliezer painted a chilling and grim picture of a future where AI ultimately kills us all. Robin is here to provide a different perspective.
------ ✨ DEBRIEF | Unpacking the episode: https://www.bankless.com/debrief-robin-hanson ------ ✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint
------ ✨ NEW BANKLESS PRODUCT | Token Hub https://bankless.cc/TokenHubRSS
------ In this episode, we explore:
- Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover. But will we potentially become their pets instead?
- The possibility of a civil war between multiple AIs, and why it's more likely than domination by a single superintelligent AI.
- Robin's concerns about the regulation of AI, and why he believes it's a greater threat than AI itself.
- A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
- Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.
Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.
------ BANKLESS SPONSOR TOOLS:
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://bankless.cc/kraken
🦄UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET https://bankless.cc/phantom-waitlist
🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE https://bankless.cc/MetaMask
------ Topics Covered
0:00 Intro
8:42 How Robin is Weird
10:00 Are We All Going to Die?
13:50 Eliezer’s Assumption
25:00 Intelligence, Humans, & Evolution
27:31 Eliezer Counter Point
32:00 Acceleration of Change
33:18 Comparing & Contrasting Eliezer’s Argument
35:45 A New Life Form
44:24 AI Improving Itself
47:04 Self Interested Acting Agent
49:56 Human Displacement?
55:56 Many AIs
1:00:18 Humans vs. Robots
1:04:14 Pause or Continue AI Innovation?
1:10:52 Quiet Civilization
1:14:28 Grabby Aliens
1:19:55 Are Humans Grabby?
1:27:29 Grabby Aliens Explained
1:36:16 Cancer
1:40:00 Robin’s Thoughts on Crypto
1:42:20 Closing & Disclaimers
------ Resources:
Robin Hanson https://twitter.com/robinhanson
Eliezer Yudkowsky on Bankless https://www.bankless.com/159-were-all-gonna-die-with-eliezer-yudkowsky
What is the AI FOOM debate? https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate
Age of Em book - Robin Hanson https://ageofem.com/
Grabby Aliens https://grabbyaliens.com/
Kurzgesagt video https://www.youtube.com/watch?v=GDSf2h9_39I&t=1s
----- Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.
Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures