
Connor Leahy

Machine learning engineer at Aleph Alpha and founder of EleutherAI, specializing in AGI and AI alignment.

Top 10 podcasts with Connor Leahy

Ranked by the Snipd community
59 snips
Apr 21, 2024 • 1h 20min

Connor Leahy - e/acc, AGI and the future.

In this discussion, Connor Leahy, CEO of Conjecture, dives into the complexities of AI alignment and the risks of advanced technologies, alongside technical researcher Daniel Clothiaux and AI alignment advocate Beff Jezos. They explore societal and cultural implications of AI, the importance of coherence in technology, and the potential for AI to develop agency. The conversation also addresses the widening gap between societal classes and the critical need for resilient institutions to navigate technological challenges, making a case for equitable opportunities and collaboration.
55 snips
Apr 13, 2023 • 1h 37min

Connor Leahy on AGI and Cognitive Emulation

Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev

Timestamps:
00:00 GPT-4
16:35 "Magic" in machine learning
27:43 Cognitive emulations
38:00 Machine learning vs. explainability
48:00 Human data = human AI?
1:00:07 Analogies for cognitive emulations
1:26:03 Demand for human-like AI
1:31:50 Aligning superintelligence

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
49 snips
Jun 26, 2023 • 1h 35min

177 - AI is a Ticking Time Bomb with Connor Leahy

Connor Leahy, CEO of Conjecture and co-founder of EleutherAI, delves into the pressing issues surrounding AI safety and alignment. He discusses the potential existential threats posed by advanced AI and why regulatory measures are crucial for our future. The conversation highlights the ideological motivations driving tech leaders in the arms race for AGI, alongside the importance of ethical considerations in AI development. Leahy shares insights on the exponential growth of AI power and emphasizes the need for accountability in navigating these risks.
31 snips
Apr 2, 2023 • 2h 40min

#112 AVOIDING AGI APOCALYPSE - CONNOR LEAHY

In this engaging conversation, Connor Leahy, CEO of Conjecture and AI safety advocate, shares his insights on the potential risks of artificial general intelligence (AGI). He stresses the crucial need for AI alignment and the importance of empathy in understanding these systems. The discussion dives into the complexities of AI training, the dangers of dehumanizing biases, and the challenges of balancing research with product development. Connor also reflects on personal growth, storytelling, and the pressing need to alleviate human suffering as we navigate the rapidly evolving AI landscape.
23 snips
May 19, 2023 • 1h 41min

E26: [Bonus Episode] Connor Leahy on AGI, GPT-4, and Cognitive Emulation w/ FLI Podcast

Connor Leahy, CEO of Conjecture, dives into the profound implications of AI, especially with GPT-4’s advancements. He discusses the rising public concern surrounding AI capabilities and the ethical necessity of AI alignment. The conversation highlights cognitive emulation as a pivotal approach to developing human-like reasoning in AI. Leahy also examines the limitations of AI in replicating human intuition and the vital need for transparency and accountability in AI systems, calling for responsible innovation in this rapidly evolving field.
21 snips
Jan 26, 2023 • 1h 5min

Connor Leahy on AI Safety and Why the World is Fragile

Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Introduction
00:47 What is the best way to understand AI safety?
09:50 Why is the world relatively stable?
15:18 Is the main worry human misuse of AI?
22:47 Can humanity solve AI safety?
30:06 Can we slow down AI development?
37:13 How should governments regulate AI?
41:09 How do we avoid misallocating AI safety government grants?
51:02 Should AI safety research be done by for-profit companies?
21 snips
Nov 28, 2020 • 2h 44min

#031 WE GOT ACCESS TO GPT-3! (With Gary Marcus, Walid Saba and Connor Leahy)

This conversation features Gary Marcus, a psychology and neuroscience professor known for critiquing deep learning, alongside Walid Saba, an expert in natural language understanding, and Connor Leahy, a proponent of large language models. They dive into GPT-3's strengths and weaknesses, the philosophical implications of AI creativity, and the importance of integrating reasoning with pattern recognition. The dialogue also critiques AI's limitations in understanding language and explores future possibilities for achieving true artificial general intelligence.
20 snips
Apr 20, 2023 • 52min

Connor Leahy on the State of AI and Alignment Research

Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Landscape of AI research labs
10:13 Is AGI a useful term?
13:31 AI predictions
17:56 Reinforcement learning from human feedback
29:53 Mechanistic interpretability
33:37 Yudkowsky and Christiano
41:39 Cognitive Emulations
43:11 Public reactions to AI
18 snips
Jun 21, 2023 • 1h 25min

Will AI destroy civilization in the near future? (with Connor Leahy)

Connor Leahy, CEO of Conjecture and former leader of EleutherAI, dives into the pressing existential risks of AI and why they may be more immediate than many think. He discusses misconceptions about AI controllability and the challenges of understanding neural networks. The conversation touches on technologies like AutoGPT and the importance of empathy in addressing these risks. Connor also highlights actionable steps people can take to ensure responsible AI development, advocating for societal engagement and robust safety measures.
18 snips
Jun 20, 2023 • 1h 31min

Joscha Bach and Connor Leahy on AI risk

Joscha Bach, a leading cognitive scientist and AI researcher, discusses how general intelligence emerges from civilization rather than individuals. He envisions a future where humans and AI coexist harmoniously but warns that global regulation of AI is unrealistic. Connor Leahy, CEO of Conjecture, believes humanity has more control over its AI destiny than commonly assumed, pushing for beneficial AGI development. They explore the ethical responsibilities, risks of bias in AI, and the philosophical implications of aligning AI with human values, urging a deeper understanding of technology's trajectory.