
Machine Learning Street Talk (MLST)
Showdown Between e/acc Leader And Doomer - Connor Leahy + Beff Jezos
Podcast summary created with Snipd AI
Quick takeaways
- The e/acc movement promotes accelerating technology and growth, positioning itself as a counterweight to the natural human fear of the unknown.
- Caution and preparation for risks associated with emerging technologies, including AI and memetic systems, are crucial.
- The debate on regulation brings up questions about the values instilled in society and the role of adaptive and nuanced approaches in technology development.
- Understanding the potential dangers and uncertainties of AI requires a balance between risk-taking and thorough study.
- Maintaining a hierarchical cybernetic control structure in civilization allows for adaptability and prevents concentration of power.
- The importance of decentralization, competition, and distributed control in handling powerful technologies is emphasized to avoid excessive control and promote collaboration.
Deep dives
The e/acc movement and fostering acceleration
The central idea behind e/acc (effective accelerationism) is to promote acceleration of technology and growth above all else; in the debate, the movement is characterized as a techno-libertarian, capitalist group aiming for rapid growth and development. One point of contention is whether this approach instills the right values in society. e/acc focuses on accelerating technology and holds that there is untapped upside in many areas. Its proponents argue it acts as a balancing force against the natural human tendency to fear the unknown and be overly cautious.
Connor Leahy's perspective on risks and threats
Connor Leahy expresses concern about the risks associated with AI and other systems. He believes we need to proceed with caution and be prepared for the risks that come with emerging technologies. Leahy emphasizes that these risks are not limited to AI systems alone but also extend to memetic systems. He advocates a more measured approach and urges society to avoid stumbling into threats that could have devastating consequences.
The importance of regulation and societal values
The debate between Connor Leahy and Beff Jezos touches on regulation and its impact on societal values. Leahy questions the idea of just letting things unfold naturally through the lens of physics, raising concerns about whether the right values are instilled in society when only the cold, hard lens of physics is considered. Jezos argues for a more adaptive and nuanced approach, suggesting that the market and civilization tend to develop technologies with positive utility and growth. Both sides discuss the need for dynamic legislation and the challenge of finding the optimal balance.
The potential dangers and uncertainties of AI
The conversation expands to the potential dangers and uncertainties surrounding AI. Beff Jezos acknowledges the need for risk-taking and exploring new frontiers in AI and technology in order to propel growth and development. Connor Leahy, however, urges caution and highlights the importance of thoroughly studying and understanding emerging technologies before fully embracing them. The debate centers on the balance between minimizing risk and maximizing the potential benefits of technological advancement.
The Importance of Hierarchical Cybernetic Control in Civilization
One key point discussed in the podcast is the importance of maintaining a hierarchical cybernetic control structure in civilization. The speaker argues that this structure allows for fault tolerance and adaptation, preventing the concentration of power in a single centralized entity. They emphasize the need for a careful balance between order and entropy, highlighting that suppressing variance and monopolizing power can lead to a loss of adaptability and potential negative consequences. The goal is to ensure that power gradients are maintained but not excessively sharp, allowing for competition and multiple nodes of control.
Concerns about Centralized AI Safety Research
Another topic discussed is the potential risks of centralizing AI safety research. The speaker questions the effectiveness and safety of such a centralized approach, drawing parallels to centralized biosecurity labs and their past leaks. They argue that a central authority's control over AI and information landscapes could itself be manipulated, leading to adversarial manipulation and the manufacturing of consent. They urge caution about legislation that crystallizes particular regulations too early, advocating light-touch regulation while acknowledging that the potential positive effects enabled by high-compute AI research deserve careful consideration.
The Limitations of Current Institutions
The podcast also delves into the limitations of current institutions in handling powerful technologies. Both speakers express a lack of trust in the competence and effectiveness of current institutions and leaders to navigate the challenges that come with advanced technology. They highlight the need for improved institutions, adaptive frameworks, and better decision-making processes to mitigate risks and ensure responsible development.
The Call for Decentralization and Distributed Control
There is a shared belief in the podcast concerning the importance of decentralization and distributed control in handling powerful technologies. The speakers argue against monopolizing intelligence and propose a future where access to advanced technology is available to many, ensuring that no single entity has excessive control. They stress the value of competition, variance, and maintaining a balance between control and adaptability to avoid the concentration of power and promote a more equitable and collaborative future.
Adapting Institutions and the Importance of Competition
In this podcast episode, the importance of adapting institutions and promoting competition is discussed. The speaker believes that building a good world is a complex puzzle that requires constant adaptation and innovation. They argue that current institutions are not effective in solving the challenges we face and that encouraging competition and alternative institutions is crucial for progress. They emphasize the need for an adaptive and dynamic approach to policy-making, acknowledging the high uncertainty and complexity of the future. While they recognize the potential benefits of regulation and legislation, they caution against overregulation and the concentration of power, calling for a balance between stability and innovation.
Uncertainty in Predicting the Future and the Need for Agility
The podcast explores the idea that predicting the future and designing long-term policies are challenging due to the high uncertainty of outcomes. It highlights the need for agility and adaptive decision-making, given the dynamic nature of technological advancements and changing landscapes. The speaker argues that crystallizing policies too early might lead to suboptimal results and advocates for maintaining a flexible approach. They emphasize the importance of constantly gathering data, testing hypotheses, and adjusting strategies accordingly.
The Role of Information and Access in Policy Design
The discussion delves into the role of information and access in policy design. The speaker emphasizes the significance of decentralized decision-making and access to information for consumers and institutions. They propose that a healthy market of institutions can be achieved by allowing alternative institutions to compete and improve the current landscape. The podcast also touches upon the risks of regulatory capture and the need to prevent power asymmetry in society. The speaker highlights the importance of maintaining information symmetry and the value of current institutions adapting to the changing technological landscape.
Striking a Balance Between Regulation and Innovation
The podcast concludes with a debate on the optimal level of regulation and the trade-off between stability and innovation. The speaker acknowledges the need for regulations that address market failures and externalities but also expresses skepticism about heavy-handed approaches. They argue that it is essential to strike a balance and avoid excessively restrictive regulations. They advocate for considering the uncertainty of the future when implementing policies and emphasize the value of market-based disruption and constant innovation for societal progress.
The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, the founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash over opposing perspectives on how to steer humanity towards a more optimal path.
Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon: https://patreon.com/mlst. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
Public Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk
Post-interview with Beff and Connor: https://www.patreon.com/posts/97905213
Pre-interview with Connor and his colleague Dan Clothiaux: https://www.patreon.com/posts/connor-leahy-and-97631416
Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.
Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.
Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd Connor Leahy: https://twitter.com/npcollapse
YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ
TOC:
00:00:00 - Intro
00:03:05 - Society Library reference
00:03:35 - Debate starts
00:05:08 - Should any tech be banned?
00:20:39 - Leaded Gasoline
00:28:57 - False vacuum collapse method?
00:34:56 - What if there are dangerous aliens?
00:36:56 - Risk tolerances
00:39:26 - Optimizing for growth vs value
00:52:38 - Is vs ought
01:02:29 - AI discussion
01:07:38 - War / global competition
01:11:02 - Open source F16 designs
01:20:37 - Offense vs defense
01:28:49 - Morality / value
01:43:34 - What would Connor do
01:50:36 - Institutions/regulation
02:26:41 - Competition vs. Regulation Dilemma
02:32:50 - Existential Risks and Future Planning
02:41:46 - Conclusion and Reflection
Note from Tim: I baked the chapter metadata into the mp3 file this time - does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file - sorry about that, just skip ahead when the episode finishes.