Joscha Bach, a leading cognitive scientist and AI researcher, discusses how general intelligence emerges from civilization rather than individuals. He envisions a future where humans and AI coexist harmoniously but warns that global regulation of AI is unrealistic. Connor Leahy, CEO of Conjecture, believes humanity has more control over its AI destiny than commonly assumed, pushing for beneficial AGI development. They explore the ethical responsibilities and existential risks of AI and the philosophical implications of aligning AI with human values, urging a deeper understanding of technology's trajectory.
INSIGHT: Civilizational Intelligence
General intelligence emerges from civilization, not individuals.
Humans, with biological constraints, struggle to achieve high general intelligence alone.
INSIGHT: Coherence vs. Diversity
Maintaining diverse perspectives is crucial, as a single, coherent worldview limits exploration.
Consciousness helps individuals maintain internal coherence but can hinder societal progress.
ANECDOTE: Eliezer Yudkowsky's Influence
Joscha Bach appreciates Eliezer Yudkowsky's work on AI safety.
He finds that many counterarguments against Yudkowsky fail to address the core of his concerns.
Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk
The first 10 minutes of audio from Joscha aren't great; the quality improves after that.
Transcript and longer summary: https://docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5tlTK_BGZ140QqqTudF8UI/edit?usp=sharing
Dr. Joscha Bach argued that general intelligence emerges from civilization, not individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align. However, Bach is uncertain about how AI progress will unfold or which scenarios are most likely.
Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting but much work is still needed to ensure a positive outcome.
Connor Leahy believes we have more control over the future than the default outcome might suggest. With sufficient time and effort, humanity could develop the technology and coordination to build a beneficial AGI. However, the default outcome likely leads to an undesirable scenario if we do not actively work to build a better future. Leahy thinks finding values and priorities most humans endorse could help align AI, even if individuals disagree on some values.
Leahy argued that a future in which humans and AGI harmoniously coexist is ideal, but achieving it will require substantial work. While regulation faces challenges, it remains worth exploring. Leahy believes limits to AI progress exist, but that we are unlikely to reach them before humanity is put at risk. He worries that even modestly superhuman intelligence could disrupt the status quo if misaligned with human values and priorities.
Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address risks and challenges proactively. They agreed substantial uncertainty remains around how AI will progress and what scenarios are most plausible. But developing a shared purpose between humans and AI, improving coordination and control, and finding human values to help guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and willingness to consider multiple perspectives, continued discussions like this one could help ensure the future of AI is one that benefits and inspires humanity.
TOC:
00:00:00 - Introduction and Background
00:02:54 - Different Perspectives on AGI
00:13:59 - The Importance of AGI
00:23:24 - Existential Risks and the Future of Humanity
00:36:21 - Coherence and Coordination in Society
00:40:53 - Possibilities and Future of AGI
00:44:08 - Coherence and Alignment
01:08:32 - The Role of Values in AI Alignment
01:18:33 - The Future of AGI and Merging with AI
01:22:14 - The Limits of AI Alignment
01:23:06 - The Scalability of Intelligence
01:26:15 - Closing Statements and Future Prospects