Futurists David Wood & Gerd Leonhard Discuss Artificial General Intelligence (Livestream Edit)
Jul 22, 2024
David Wood, a noted futurist in AI, joins Gerd Leonhard, a prominent voice on tech's societal impact, for a thought-provoking discussion. They argue against the unchecked development of Artificial General Intelligence (AGI), promoting a non-proliferation agreement to safeguard humanity. The duo delves into the vital balance between AI's potential benefits and existential risks, the importance of sustainable capitalism, and ethical considerations in AI technologies. They also highlight 'protopia,' advocating for collective action towards positive societal change.
The podcast emphasizes the necessity of clear regulations and standards for IA/AI, while expressing concerns over the existential risks of AGI development by private entities.
A shift towards an economic model focused on sustainability and inclusiveness is vital to ensure responsible technological advancements and mitigate the risks of AI.
Deep dives
The Optimistic Future Despite Challenges
The discussion emphasizes a hopeful perspective on the future, countering the prevalent negative outlook driven by fear of potential disasters. It makes the case for a deliberately optimistic, even naive, approach to technological advancement, arguing that the human capacity for collaboration and problem-solving should not be underestimated. This optimism is grounded in the belief that, despite existing challenges, humanity has a remarkable ability to innovate and improve circumstances through technology. The conversation points out that acknowledging both risks and solutions can pave the way for a more favorable outcome for society.
Challenges of Regulating AI Technology
The conversation delves into the complexity of establishing treaties to regulate AI, comparing it to the Nuclear Non-Proliferation Treaty but underscoring the unique challenges posed by artificial intelligence. The discussion highlights three significant factors complicating regulation: the difficulty in monitoring AI, the inherent pressures to innovate rapidly, and the open-source nature of many AI technologies. These challenges make it difficult to enforce compliance, as entities may prioritize immediate commercial success over ethical considerations and collaborative safety measures. Overall, there is a recognition that a fundamental shift in the economic paradigm may be necessary to align motivations with sustainable development and safety.
The Need for a New Economic Paradigm
The dialogue emphasizes that the current capitalist framework may be unsuited to the emerging challenges of AI and suggests the need for an economic model centered around sustainability, inclusiveness, and greater human consideration. The prevailing focus on profit, power, and competitive superiority is seen as potentially detrimental, as it may lead to reckless technological development without adequate ethical constraints. The potential for tremendous wealth generation through AI is acknowledged, but it is argued that new values must be prioritized to avoid pitfalls experienced with climate change and other global crises. Proposals for shifts towards a model valuing people, the planet, and purpose are crucial for guiding responsible technological advancements.
Education and Awareness as Key Components
The discussion identifies education as a cornerstone for fostering a more informed society that can navigate the complexities of technology and its implications. A call to action urges the importance of sharing knowledge about potential risks and solutions associated with AI, particularly targeting younger generations who may feel a sense of disillusionment about the future. By instilling hope, promoting collaboration, and highlighting successful innovations, society can strive towards a brighter future. Moreover, the campaign aims to inspire action and optimism, paralleling historical movements that reshaped public sentiment positively.
This is an edited version of our livestream Q&A sessions with my guest David Wood on July 18 and 19, 2024.
@LondonFuturists' David Wood joined me as a special guest on this live show. Thank you!
You can also watch it on my YouTube channel here https://www.youtube.com/watch?v=yYyTIky2MLc or the whole thing here https://www.youtube.com/watch?v=W3dRQ7QZ_wc
In this special livestream event, I outlined my arguments that while IA (Intelligent Assistance) and some forms of narrow AI may well be quite beneficial to humanity, the idea of building AGIs, i.e. 'generally intelligent digital entities' (as set forth by Sam Altman / #openai and others), represents an existential risk that, imho, should not be undertaken or self-governed by private enterprises, multinational corporations, or venture-capital-funded startups.
So: IA/AI, yes, but with clear rules, standards, and guardrails. AGI: no, unless we're all on the same page.
I explain why I believe we need an AGI Non-Proliferation Agreement, what the difference is between IA/AI and AGI or ASI (superintelligence), why it matters, and how we could go about it.