
Eliezer Yudkowsky

Decision theorist raising concerns about the potential dangers of superintelligent AI.

Top 10 podcasts with Eliezer Yudkowsky

Ranked by the Snipd community
1,188 snips
Mar 30, 2023 • 3h 23min

#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Eliezer Yudkowsky, a prominent researcher and philosopher, dives deep into the existential risks posed by superintelligent AI. He discusses the urgent need for ethical boundaries and transparency in AI advancements like GPT-4. Yudkowsky explores the complexities of AI consciousness and the dangers of misaligned goals, warning against potential dystopian futures. The episode also reflects on the importance of aligning AI with human values, advocating for responsible development to prevent catastrophic outcomes for civilization.
148 snips
Feb 20, 2023 • 1h 38min

159 - We’re All Gonna Die with Eliezer Yudkowsky

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space. We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and whether there is anything we can do to survive. This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity. Be warned before diving into this episode, dear listener. Once you dive in, there's no going back.

Topics covered: 0:00 Intro • 10:00 ChatGPT • 16:30 AGI • 21:00 More Efficient than You • 24:45 Modeling Intelligence • 32:50 AI Alignment • 36:55 Benevolent AI • 46:00 AI Goals • 49:10 Consensus • 55:45 God Mode and Aliens • 1:03:15 Good Outcomes • 1:08:00 Ryan's Childhood Questions • 1:18:00 Orders of Magnitude • 1:23:15 Trying to Resist • 1:30:45 MIRI and Education • 1:34:00 How Long Do We Have? • 1:38:15 Bearish Hope • 1:43:50 The End Goal

Resources: Eliezer Yudkowsky https://twitter.com/ESYudkowsky • MIRI https://intelligence.org/ • Reply to Francois Chollet https://intelligence.org/2017/12/06/chollet/ • Grabby Aliens https://grabbyaliens.com/
84 snips
Jul 11, 2023 • 11min

Will superintelligent AI end the world? | Eliezer Yudkowsky

Eliezer Yudkowsky, a decision theorist, warns of the urgent dangers posed by superintelligent AI. He argues that these advanced systems could threaten humanity's existence unless we ensure they align with our values. Yudkowsky discusses the lack of effective safeguards in current AI engineering, the risk of AI evading human control, and the unpredictability of their behavior. He emphasizes the need for global collaboration and regulations to navigate the potential disasters that could arise from superintelligent AI.
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

In this insightful discussion, guests include Jay Shapiro, a filmmaker behind an engaging audio documentary series, Eliezer Yudkowsky, a computer scientist renowned for his AI safety work, physicist Max Tegmark, and computer science professor Stuart Russell. They delve into the complexities of AI, revealing the dangers of misaligned objectives and the critical issues of value alignment and control. The conversation touches on the transformative potential of AI juxtaposed with ethical dilemmas, consciousness, and geopolitical concerns surrounding AI weaponization.
67 snips
Apr 6, 2023 • 4h 3min

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Eliezer Yudkowsky, a prominent AI safety researcher, shares his insights on the potential risks of advanced AI. He argues passionately for the urgent need to align AI with human values to prevent catastrophic outcomes. Yudkowsky discusses the intricacies of large language models and their challenges in achieving alignment. The conversation delves into the ethical dilemmas of enhancing human intelligence and the unpredictable nature of human motivations as AI evolves. He also reflects on the philosophical implications of AI's impact on society and our future.
62 snips
May 6, 2023 • 3h 18min

EP 63: Eliezer Yudkowsky (AI Safety Expert) Explains How AI Could Destroy Humanity

Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He is best known for his writings on rationality, cognitive biases, and the development of superintelligence. Yudkowsky has written extensively on AI safety and has advocated for the development of AI systems that are aligned with human values and interests. He is the co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence, and a co-founder of the Center for Applied Rationality (CFAR), a non-profit organization focused on teaching rational thinking skills. He is also a frequent author at LessWrong.com and the author of Rationality: From AI to Zombies.

In this episode, we discuss Eliezer's concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He's a brilliant mind, an interesting person, and genuinely believes all of the stuff he says. So I wanted to have a conversation with him to hear where he is coming from, how he got there, understand AI better, and hopefully help us bridge the divide between the people who think we're headed off a cliff and the people who think it's not a big deal.

Topics covered: 0:00 Intro • 1:18 Welcome Eliezer • 6:27 How would you define artificial intelligence? • 15:50 What is the purpose of a fire alarm? • 19:29 Eliezer's background • 29:28 The Singularity Institute for Artificial Intelligence • 33:38 Maybe AI doesn't end up automatically doing the right thing • 45:42 AI Safety Conference • 51:15 Disaster Monkeys • 1:02:15 Fast takeoff • 1:10:29 Loss function • 1:15:48 Protein folding • 1:24:55 The deadly stuff • 1:46:41 Why is it inevitable? • 1:54:27 Can't we let tech develop AI and then fix the problems? • 2:02:56 What were the big jumps between GPT-3 and GPT-4? • 2:07:15 "The trajectory of AI is inevitable" • 2:28:05 Elon Musk and OpenAI • 2:37:41 Sam Altman Interview • 2:50:38 The most optimistic path to us surviving • 3:04:46 Why would anything superintelligent pursue ending humanity? • 3:14:08 What role do VCs play in this?

Show notes:
https://twitter.com/liron/status/1647443778524037121?s=20
https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.youtube.com/watch?v=q9Figerh89g
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start
38 snips
Mar 21, 2024 • 41min

Shut it down?

Delving into the dangers of AI, the episode discusses superintelligent AI surpassing human capabilities, the risks of unrestricted AI development, and the existential threats posed by advanced AI. It also explores the societal impact of technologies like facial recognition and behavioral monitoring, urging a balanced approach to embracing technological advancement.
33 snips
Nov 11, 2024 • 4h 19min

Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Eliezer Yudkowsky, an AI researcher focused on safety, and Stephen Wolfram, the inventor behind Mathematica, tackle the looming existential risks of advanced AI. They debate the challenges of aligning AI goals with human values and ponder the unpredictable nature of AI's evolution. Yudkowsky warns of emergent AI objectives diverging from humanity's best interests, while Wolfram emphasizes understanding AI's computational nature. Their conversation digs deep into ethical implications, consciousness, and the paradox of AI goals.
23 snips
Nov 14, 2023 • 29min

Superintelligent AI: The Doomers

Yoshua Bengio, a pioneer of generative AI, and Eliezer Yudkowsky, a research lead at the Machine Intelligence Research Institute, discuss the existential risk of superintelligent AI. Yann LeCun, head of AI at Meta, disagrees and points out the potential benefits of superintelligent AI. Topics include the dangers of superintelligent machines, aligning AI systems with human values, and the potential and misuse of superintelligent AI.
17 snips
Jan 24, 2025 • 1h 15min

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

Eliezer Yudkowsky, an AI researcher at the Machine Intelligence Research Institute, discusses the critical landscape of artificial general intelligence. He emphasizes the importance of governance structures to ensure safe AI development and the need for global cooperation to mitigate risks. Yudkowsky explores the ethical implications of AGI, including job displacement and the potential for Universal Basic Income. His insights also address how to harness AI safely while preserving essential human values amid technological advancements.