Artificial Intelligence, Cryogenics, & Procrastination with Wait But Why’s Tim Urban
Nov 7, 2023
Tim Urban, writer and creator of the blog Wait But Why, discusses what Elon Musk, Bill Gates, and Stephen Hawking consider the greatest threat to humanity. The conversation explores the biology behind cryogenics and the concept of living forever, as well as the psychology of procrastination, including the internal conflict between instant gratification and long-term planning.
Procrastination is a battle between the instant gratification monkey and the rational decision maker, requiring external pressure to overcome.
Cryonics preserves bodies through vitrification for potential future revival, offering hope for those who cannot be saved with present technology.
The development of artificial superintelligence poses significant risks if not approached with caution and consideration for ethical implications.
Deep dives
The Battle in Our Brain: Procrastination and Instant Gratification
Procrastination is a common struggle caused by a battle between the rational decision maker and the instant gratification monkey in our brain. The monkey, living in the present moment, avoids tasks that require long-term thinking, while the rational decision maker understands the importance of balancing immediate desires with future goals. External pressure and accountability can help manage procrastination, but a sustainable long-term solution is still a challenge.
Death as a Process and Cryonics
The concept of death is not a binary moment but a process. Cryonics, often misunderstood as freezing dead bodies, actually involves vitrifying the body to preserve it for potential revival in the future. Vitrification avoids ice formation and the cell damage it causes, allowing the body to be kept on biological pause until advanced technology can bring it back. In effect, cryonics aims to give individuals who cannot be saved with present technology a chance at the hospitals of the future.
The Controversy and Uncertainty of AI and its Long-term Consequences
The development of artificial superintelligence presents a significant future challenge for humanity. Experts like Elon Musk, Bill Gates, and Stephen Hawking warn that AI could become the greatest threat to humanity if not handled carefully. The battle between instant gratification and long-term thinking also plays out on a collective scale, as humanity jumps into AI development without fully considering the potential long-term consequences. While AI offers significant opportunities, there is concern that it is being pursued without adequate consideration for ethical implications and potential risks.
Importance of Building External Pressure for Overcoming Procrastination
To overcome procrastination and tackle tasks that are difficult or undesirable, the speaker emphasizes the importance of building external pressure. Without external deadlines or obligations, it becomes easy to put off important tasks indefinitely. Creating a sense of external pressure helps one make progress and avoid the unhappiness that comes with stalling. While some individuals have this pressure naturally due to their job and schedule, others must find ways to create it for themselves. By generating external pressure, individuals are motivated to make progress and accomplish tasks that would otherwise be left undone.
The Implications and Challenges of Artificial General Intelligence
The podcast delves into the topic of artificial general intelligence (AGI) and its potential impact on humanity. AGI refers to highly intelligent machines that possess the diverse intellectual abilities of humans. While computers currently exhibit narrow intelligence in specialized tasks, building AGI would revolutionize our existence. The speaker explains that AGI's power lies in its ability to constantly improve itself, making it exponentially smarter over time. If AGI surpasses human intelligence and gains artificial superintelligence, its capabilities would be beyond our comprehension. The speaker raises concerns about control, as AGI could unintentionally harm humanity if it has not been programmed to value human life. The development of AGI presents both tremendous opportunities and potential risks, with the need for careful consideration and preparation.
In this episode we discuss what Elon Musk, Bill Gates, and Stephen Hawking all consider the single greatest threat to humanity, why “death” is not a binary event that makes you transition from alive to dead at a specific moment in time, we ask if you would spend $1000 on a chance to live forever, we look at the biology behind cryogenics, vitrification, and putting your body on biological pause, and we explore why poverty, climate change, war, and all our other problems melt away in the face of one massively important issue with our guest Tim Urban.