
Connor Leahy

Co-author of the LessWrong post "Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI". He focuses on cognitive software and safe AI development.

Top 10 podcasts with Connor Leahy

Ranked by the Snipd community
49 snips
Jun 26, 2023 • 1h 35min

177 - AI is a Ticking Time Bomb with Connor Leahy

AI is here to stay, but at what cost? Connor Leahy is the CEO of Conjecture, a mission-driven organization that's trying to make the future of AI go as well as it possibly can. He is also a Co-Founder of EleutherAI, an open-source AI research non-profit lab. In today's episode, Connor and David cover:

1) The intuitive arguments behind the AI Safety debate
2) The two defining categories of ways AI could end all of humanity
3) The major players in the race towards AGI, and why they all seem to be ideologically motivated, rather than financially motivated
4) Why the progress of AI power is based on TWO exponential curves
5) Why Connor thinks government regulation is the easiest and most effective way of buying us time

------
🚀 Unlock $3,000+ in Perks with Bankless Citizenship 🚀 https://bankless.cc/GetThePerks
------
📣 CYFRIN | Smart Contract Audits & Solidity Course https://bankless.cc/cyfrin
------
BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
🦊 METAMASK LEARN | HELPFUL WEB3 RESOURCE https://bankless.cc/MetaMask
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🧠 AMBIRE | SMART CONTRACT WALLET https://bankless.cc/Ambire
🦄 UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
🛞 MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle
-----------
TIMESTAMPS
0:00 Intro
3:12 AI Alignment Importance
9:40 Finding Neutrality
14:16 AI Doom Scenarios
21:06 How AI Misalignment Evolves
25:56 The State of AI Alignment
32:07 The AI Race Trap
41:49 Motivations of the AI Race
56:18 AI Regulation Efforts
1:14:28 How AI Regulation & Crypto Compare
1:21:44 AI Teachings of Human Coordination
1:36:53 Closing & Disclaimers
-----------
RESOURCES
Connor Leahy: https://twitter.com/NPCollapse
Conjecture Research: https://www.conjecture.dev/research/
EleutherAI Discord: https://discord.com/invite/zBGx3azzUn
Stop AGI: https://www.stop.ai/
-----------
Related Episodes:
We're All Gonna Die with Eliezer Yudkowsky: https://www.youtube.com/watch?v=gA1sNLL6yg4
How We Prevent the AI's from Killing us with Paul Christiano: https://www.youtube.com/watch?v=GyFkWb903aU
-----------
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure: from time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures
45 snips
Apr 21, 2024 • 1h 20min

Connor Leahy - e/acc, AGI and the future.

Connor Leahy, CEO of Conjecture, discusses AI systems developing agency, coherence in technology, and the role of institutions in handling risks. He explores concerns about AI widening inequality and emphasizes equal access to opportunities. Key highlights include Leahy's view of life as a process that "rides entropy" and his case for balancing coherence with variance when exploring potential upsides.
31 snips
Apr 2, 2023 • 2h 40min

#112 AVOIDING AGI APOCALYPSE - CONNOR LEAHY

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5

In this podcast with the legendary Connor Leahy (CEO of Conjecture), recorded in Dec 2022, we discuss various topics related to artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, dehumanizing our thinking to avoid anthropomorphic biases, and the value of real-world experiences in learning and personal growth. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.

Connor Leahy expresses concern about the rapid development of AI and the potential dangers it poses, especially as AI systems become more powerful and integrated into society. He argues that we need a better understanding of AI systems to ensure their safe and beneficial development. The discussion also touches on the concept of "futuristic whack-a-mole," where futurists predict potential AGI threats and others try to come up with solutions for those specific scenarios. The problem is that there could be many more scenarios that neither party can think of, especially when dealing with a system that's smarter than humans.

https://www.linkedin.com/in/connor-j-leahy/
https://twitter.com/NPCollapse

Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass https://xrai.glass/)

TOC:
The success of ChatGPT and its impact on the AI field [00:00:00]
Subjective experience [00:15:12]
AI Architectural discussion including RLHF [00:18:04]
The paradox of AI alignment and the future of AI in society [00:31:44]
The impact of AI on society and politics [00:36:11]
Future shock levels and the challenges of predicting the future [00:45:58]
Long termism and existential risk [00:48:23]
Consequentialism vs. deontology in rationalism [00:53:39]
The Rationalist Community and its Challenges [01:07:37]
AI Alignment and Conjecture [01:14:15]
Orthogonality Thesis and AI Preferences [01:17:01]
Challenges in AI Alignment [01:20:28]
Mechanistic Interpretability in Neural Networks [01:24:54]
Building Cleaner Neural Networks [01:31:36]
Cognitive horizons / The problem with rapid AI development [01:34:52]
Founding Conjecture and raising funds [01:39:36]
Inefficiencies in the market and seizing opportunities [01:45:38]
Charisma, authenticity, and leadership in startups [01:52:13]
Autistic culture and empathy [01:55:26]
Learning from real-world experiences [02:01:57]
Technical empathy and transhumanism [02:07:18]
Moral status and the limits of empathy [02:15:33]
Anthropomorphic Thinking and Consequentialism [02:17:42]
Conjecture: Balancing Research and Product Development [02:20:37]
Epistemology Team at Conjecture [02:31:07]
Interpretability and Deception in AGI [02:36:23]
Futuristic whack-a-mole and predicting AGI threats [02:38:27]

Refs:
1. OpenAI's ChatGPT: https://chat.openai.com/
2. The Mystery of Mode Collapse (article): https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse
3. The Rationalist's Guide to the Galaxy: https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
5. Alfred Korzybski: https://en.wikipedia.org/wiki/Alfred_Korzybski
6. Instrumental Convergence: https://en.wikipedia.org/wiki/Instrumental_convergence
7. Orthogonality Thesis: https://en.wikipedia.org/wiki/Orthogonality_thesis
8. Brian Tomasik's Essays on Reducing Suffering: https://reducing-suffering.org/
9. Epistemological Framing for AI Alignment Research: https://www.lesswrong.com/posts/Y4YHTBziAscS5WPN7/epistemological-framing-for-ai-alignment-research
10. How to Defeat Mind Readers: https://www.alignmentforum.org/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers
11. Society of Mind: https://www.amazon.co.uk/Society-Mind-Marvin-Minsky/dp/0671607405
20 snips
May 19, 2023 • 1h 41min

E26: [Bonus Episode] Connor Leahy on AGI, GPT-4, and Cognitive Emulation w/ FLI Podcast

[Bonus Episode] Future of Life Institute Podcast host Gus Docker interviews Conjecture CEO Connor Leahy to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev

Future of Life Institute is the organization that recently published an open letter calling for a six-month pause on training new AI systems. FLI was founded by Jaan Tallinn, whom we interviewed in Episode 16 of The Cognitive Revolution. We think their podcast is excellent. They frequently interview critical thinkers in AI like Neel Nanda, Ajeya Cotra, and Connor Leahy; the Leahy episode, which we found particularly fascinating, is the one airing for our audience today.

The FLI Podcast also recently interviewed Nathan Labenz for a 2-part episode: https://futureoflife.org/podcast/nathan-labenz-on-how-ai-will-transform-the-economy/

SUBSCRIBE to the Future of Life Institute Podcast:
Apple: https://podcasts.apple.com/us/podcast/future-of-life-institute-podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

--
We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com.

RECOMMENDED PODCAST:
The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade-offs, and dynamics of constructing high-performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck, Patty McCord. https://link.chtbl.com/hrheretics

TIMESTAMPS:
(00:00) Episode introduction
(01:55) GPT-4
(18:30) "Magic" in machine learning
(29:43) Cognitive emulations
(40:00) Machine learning vs. explainability
(49:50) Human data = human AI?
(1:01:50) Analogies for cognitive emulations
(1:28:10) Demand for human-like AI
(1:33:50) Aligning superintelligence

SPONSORS:
Shopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of ALL eCommerce in the US, and Shopify's the global force behind Allbirds, Rothy's, Brooklinen, and millions of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform to their in-person POS system – wherever and whatever you're selling, Shopify's got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for the $1/month trial period: https://shopify.com/cognitive

This show is produced by Turpentine: a network of podcasts, newsletters, and more, covering technology, business, and culture — all from the perspective of industry insiders and experts. We're launching new shows every week, and we're looking for industry-leading sponsors — if you think that might be you and your company, email us at erik@turpentine.co.

If you'd like to listen to Part 2 of this interview with Connor Leahy, you can head here: https://podcasts.apple.com/us/podcast/connor-leahy-on-the-state-of-ai-and-alignment-research/id1170991978?i=1000609972001
18 snips
Jun 21, 2023 • 1h 25min

Will AI destroy civilization in the near future? (with Connor Leahy)

Read the full transcript here.

Does AI pose a near-term existential risk? Why might existential risks from AI manifest sooner rather than later? Can't we just turn off any AI that gets out of control? Exactly how much do we understand about what's going on inside neural networks? What is AutoGPT? How feasible is it to build an AI system that's exactly as intelligent as a human but no smarter? What is the "CoEm" AI safety proposal? What steps can the average person take to help mitigate risks from AI?

Connor Leahy is CEO and co-founder of Conjecture, an AI alignment company focused on making AI systems boundable and corrigible. Connor founded and led EleutherAI, the largest online community dedicated to LLMs, which acted as a gateway for people interested in ML to upskill and learn about alignment. With capabilities increasing at breakneck speed, and our ability to control AI systems lagging far behind, Connor moved on from the volunteer, open-source Eleuther model to a full-time, closed-source model working to solve alignment via Conjecture.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Miles Kestran — Marketing

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
18 snips
Jun 20, 2023 • 1h 31min

Joscha Bach and Connor Leahy on AI risk

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

The first 10 mins of audio from Joscha isn't great; it improves after. Transcript and longer summary: https://docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5tlTK_BGZ140QqqTudF8UI/edit?usp=sharing

Dr. Joscha Bach argued that general intelligence emerges from civilization, not individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align. However, Bach is uncertain about how AI progress will unfold or which scenarios are most likely. Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting, but much work is still needed to ensure a positive outcome.

Connor Leahy believes we have more control over the future than the default outcome might suggest. With sufficient time and effort, humanity could develop the technology and coordination to build a beneficial AGI. However, the default outcome likely leads to an undesirable scenario if we do not actively work to build a better future. Leahy thinks finding values and priorities most humans endorse could help align AI, even if individuals disagree on some values. Leahy argued a future where humans and AGI harmoniously coexist is ideal but will require substantial work to achieve. While regulation faces challenges, it remains worth exploring. Leahy believes limits to progress in AI exist, but we are unlikely to reach them before humanity is at risk. He worries even modestly superhuman intelligence could disrupt the status quo if misaligned with human values and priorities.

Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address risks and challenges proactively. They agreed substantial uncertainty remains around how AI will progress and what scenarios are most plausible. But developing a shared purpose between humans and AI, improving coordination and control, and finding human values to help guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and willingness to consider multiple perspectives, continued discussions like this one could help ensure the future of AI is one that benefits and inspires humanity.

TOC:
00:00:00 - Introduction and Background
00:02:54 - Different Perspectives on AGI
00:13:59 - The Importance of AGI
00:23:24 - Existential Risks and the Future of Humanity
00:36:21 - Coherence and Coordination in Society
00:40:53 - Possibilities and Future of AGI
00:44:08 - Coherence and alignment
01:08:32 - The role of values in AI alignment
01:18:33 - The future of AGI and merging with AI
01:22:14 - The limits of AI alignment
01:23:06 - The scalability of intelligence
01:26:15 - Closing statements and future prospects
9 snips
Feb 3, 2024 • 3h

Showdown Between e/acc Leader And Doomer - Connor Leahy + Beff Jezos

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash over their opposing perspectives on how we steer humanity towards a more optimal path.

Watch behind the scenes, get early access, and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week! https://patreon.com/mlst
(public discord) https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk
Post-interview with Beff and Connor: https://www.patreon.com/posts/97905213
Pre-interview with Connor and his colleague Dan Clothiaux: https://www.patreon.com/posts/connor-leahy-and-97631416

Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions. Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.

Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd
Connor Leahy: https://twitter.com/npcollapse
YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ

TOC:
00:00:00 - Intro
00:03:05 - Society library reference
00:03:35 - Debate starts
00:05:08 - Should any tech be banned?
00:20:39 - Leaded Gasoline
00:28:57 - False vacuum collapse method?
00:34:56 - What if there are dangerous aliens?
00:36:56 - Risk tolerances
00:39:26 - Optimizing for growth vs value
00:52:38 - Is vs ought
01:02:29 - AI discussion
01:07:38 - War / global competition
01:11:02 - Open source F16 designs
01:20:37 - Offense vs defense
01:28:49 - Morality / value
01:43:34 - What would Connor do
01:50:36 - Institutions/regulation
02:26:41 - Competition vs. Regulation Dilemma
02:32:50 - Existential Risks and Future Planning
02:41:46 - Conclusion and Reflection

Note from Tim: I baked the chapter metadata into the mp3 file this time; does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file - sorry about that, just skip on when the episode finishes.
6 snips
Nov 29, 2023 • 1h 5min

#158 Connor Leahy: The Unspoken Risks of Centralizing AI Power

Connor Leahy, CEO of Conjecture, a company specializing in AI safety, discusses the risks of centralizing AI power. He highlights the need for widespread governance, controllable AI architectures, and responsible AI development. The conversation also touches on the challenges of AI ethics, policy, and regulation, and the role of governments in AI development.
5 snips
Aug 4, 2023 • 1h 30min

Can We Develop Truly Beneficial AI? George Hotz and Connor Leahy

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

George Hotz and Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Hotz believes truly aligned AI is impossible, while Leahy argues it's a solvable technical challenge. Hotz contends that AI will inevitably pursue power, but distributing AI widely would prevent any single AI from dominating. He advocates open-sourcing AI developments to democratize access. Leahy counters that alignment is necessary to ensure AIs respect human values. Without solving alignment, general AI could ignore or harm humans.

They discuss whether AI's tendency to seek power stems from optimization pressure or human-instilled goals. Leahy argues goal-seeking behavior naturally emerges, while Hotz believes it reflects human values. Though agreeing on AI's potential dangers, they differ on solutions: Hotz favors accelerating AI progress and distributing capabilities, while Leahy wants safeguards put in place. While acknowledging risks like AI-enabled weapons, they debate whether broad access or restrictions better manage threats. Leahy suggests limiting dangerous knowledge, but Hotz insists openness checks government overreach. They concur that coordination and balance of power are key to navigating the AI revolution. Both eagerly anticipate seeing whose ideas prevail as AI progresses.

Transcript and notes: https://docs.google.com/document/d/1smkmBY7YqcrhejdbqJOoZHq-59LZVwu-DNdM57IgFcU/edit?usp=sharing

Note: this is not a normal episode, i.e. the hosts are not part of the debate (and, for the record, don't agree with Connor or George).

TOC:
[00:00:00] Introduction to George Hotz and Connor Leahy
[00:03:10] George Hotz's Opening Statement: Intelligence and Power
[00:08:50] Connor Leahy's Opening Statement: Technical Problem of Alignment and Coordination
[00:15:18] George Hotz's Response: Nature of Cooperation and Individual Sovereignty
[00:17:32] Discussion on individual sovereignty and defense
[00:18:45] Debate on living conditions in America versus Somalia
[00:21:57] Talk on the nature of freedom and the aesthetics of life
[00:24:02] Discussion on the implications of coordination and conflict in politics
[00:33:41] Views on the speed of AI development / hard takeoff
[00:35:17] Discussion on potential dangers of AI
[00:36:44] Discussion on the effectiveness of current AI
[00:40:59] Exploration of potential risks in technology
[00:45:01] Discussion on memetic mutation risk
[00:52:36] AI alignment and exploitability
[00:53:13] Superintelligent AIs and the assumption of good intentions
[00:54:52] Humanity's inconsistency and AI alignment
[00:57:57] Stability of the world and the impact of superintelligent AIs
[01:02:30] Personal utopia and the limitations of AI alignment
[01:05:10] Proposed regulation on limiting the total number of flops
[01:06:20] Having access to a powerful AI system
[01:18:00] Power dynamics and coordination issues with AI
[01:25:44] Humans vs AI in Optimization
[01:27:05] The Impact of AI's Power Seeking Behavior
[01:29:32] A Debate on the Future of AI
5 snips
Nov 1, 2020 • 2h 5min

AI Alignment & AGI Fire Alarm - Connor Leahy

This week Dr. Tim Scarfe, Alex Stenlake, and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer from Aleph Alpha and founder of EleutherAI. Connor believes that AI alignment is philosophy with a deadline and that we are on the precipice; the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity or intelligence explosion is near. Connor says that AGI is like climate change but worse: even harder problems, an even shorter deadline, and even worse consequences for the future. These problems are hard, and nobody knows what to do about them.

00:00:00 Introduction to AI alignment and AGI fire alarm
00:15:16 Main Show Intro
00:18:38 Different schools of thought on AI safety
00:24:03 What is intelligence?
00:25:48 AI Alignment
00:27:39 Humans don't have a coherent utility function
00:28:13 Newcomb's paradox and advanced decision problems
00:34:01 Incentives and behavioural economics
00:37:19 Prisoner's dilemma
00:40:24 Ayn Rand and game theory in politics and business
00:44:04 Instrumental convergence and orthogonality thesis
00:46:14 Utility functions and the Stop button problem
00:55:24 AI corrigibility - self alignment
00:56:16 Decision theory and stability / wireheading / robust delegation
00:59:30 Stop button problem
01:00:40 Making the world a better place
01:03:43 Is intelligence a search problem?
01:04:39 Mesa optimisation / humans are misaligned AI
01:06:04 Inner vs outer alignment / faulty reward functions
01:07:31 Large corporations are intelligent and have no stop function
01:10:21 Dutch booking / what is rationality / decision theory
01:16:32 Understanding very powerful AIs
01:18:03 Kolmogorov complexity
01:19:52 GPT-3 - is it intelligent, are humans even intelligent?
01:28:40 Scaling hypothesis
01:29:30 Connor thought DL was dead in 2017
01:37:54 Why is GPT-3 as intelligent as a human
01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table
01:50:28 AI ethics related to AI alignment?
01:53:26 Interpretability
01:56:27 Regulation
01:57:54 Intelligence explosion

Discord: https://discord.com/invite/vtRgjbM
EleutherAI: https://www.eleuther.ai
Twitter: https://twitter.com/npcollapse
LinkedIn: https://www.linkedin.com/in/connor-j-leahy/