
Eliezer Yudkowsky

Research fellow at the Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence. A leading researcher and writer on artificial intelligence and the potential risks and benefits of advanced AI.

Top 10 podcasts with Eliezer Yudkowsky

Ranked by the Snipd community
1,099 snips
Mar 30, 2023 • 3h 23min

#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

Please support this podcast by checking out our sponsors:
– Linode: https://linode.com/lex to get $100 free credit
– House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
– InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Eliezer’s Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer’s Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(05:19) – GPT-4
(28:00) – Open sourcing GPT-4
(44:18) – Defining AGI
(52:14) – AGI alignment
(1:35:06) – How AGI may kill us
(2:27:27) – Superintelligence
(2:34:39) – Evolution
(2:41:09) – Consciousness
(2:51:41) – Aliens
(2:57:12) – AGI Timeline
(3:05:11) – Ego
(3:11:03) – Advice for young people
(3:16:21) – Mortality
(3:18:02) – Love
149 snips
Feb 20, 2023 • 1h 38min

159 - We’re All Gonna Die with Eliezer Yudkowsky

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.

------
✨ DEBRIEF | Unpacking the episode: https://shows.banklesshq.com/p/debrief-eliezer
✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint
------

We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive. This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity. Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.

------
📣 MetaMask Learn | Learn Web3 with the Leading Web3 Wallet
https://bankless.cc/
------
🚀 JOIN BANKLESS PREMIUM: https://newsletter.banklesshq.com/subscribe
------
BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://bankless.cc/kraken
🦄 UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
👻 PHANTOM | #1 SOLANA WALLET https://bankless.cc/phantom-waitlist
------

Topics Covered
0:00 Intro
10:00 ChatGPT
16:30 AGI
21:00 More Efficient than You
24:45 Modeling Intelligence
32:50 AI Alignment
36:55 Benevolent AI
46:00 AI Goals
49:10 Consensus
55:45 God Mode and Aliens
1:03:15 Good Outcomes
1:08:00 Ryan’s Childhood Questions
1:18:00 Orders of Magnitude
1:23:15 Trying to Resist
1:30:45 MIRI and Education
1:34:00 How Long Do We Have?
1:38:15 Bearish Hope
1:43:50 The End Goal

------
Resources:
Eliezer Yudkowsky: https://twitter.com/ESYudkowsky
MIRI: https://intelligence.org/
Reply to Francois Chollet: https://intelligence.org/2017/12/06/chollet/
Grabby Aliens: https://grabbyaliens.com/
------

Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

Disclosure: From time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.

In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.

We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
67 snips
Apr 6, 2023 • 4h 3min

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

For 4 hours, I tried to come up with reasons for why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong. We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
62 snips
May 6, 2023 • 3h 18min

EP 63: Eliezer Yudkowsky (AI Safety Expert) Explains How AI Could Destroy Humanity

Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He is best known for his writings on rationality, cognitive biases, and the development of superintelligence. Yudkowsky has written extensively on the topic of AI safety and has advocated for the development of AI systems that are aligned with human values and interests. Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence. He is also a co-founder of the Center for Applied Rationality (CFAR), a non-profit organization focused on teaching rational thinking skills. He is a frequent author at LessWrong.com and the author of Rationality: From AI to Zombies.

In this episode, we discuss Eliezer’s concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He’s a brilliant mind, an interesting person, and genuinely believes all of the stuff he says. So I wanted to have a conversation with him to hear where he is coming from, how he got there, understand AI better, and hopefully help us bridge the divide between the people who think we’re headed off a cliff and the people who think it’s not a big deal.

(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer’s background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn’t end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can’t we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT-3 and GPT-4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything super intelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?

Show Notes:
https://twitter.com/liron/status/1647443778524037121?s=20
https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.youtube.com/watch?v=q9Figerh89g
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson

Check out Unsupervised Learning, Redpoint's AI Podcast: https://www.youtube.com/@UCUl-s_Vp-Kkk_XVyDylNwLA

🎙 Listen to the show
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-logan-bartlett-show/id1606770839
Spotify: https://open.spotify.com/show/5WqBqDb4br3LlyVrdqOYYb?si=3076e6c1b5c94d63&nd=1
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9zb0hJZkhWbg

🎥 Subscribe on YouTube: https://www.youtube.com/channel/UCugS0jD5IAdoqzjaNYzns7w?sub_confirmation=1

Follow on Socials
📸 Instagram - https://www.instagram.com/theloganbartlettshow
🐦 Twitter - https://twitter.com/loganbartshow
🎬 Clips on TikTok - https://www.tiktok.com/@theloganbartlettshow

About the Show
Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.
36 snips
Mar 21, 2024 • 41min

Shut it down?

Delving into the dangers of AI, the podcast discusses superintelligent AI surpassing human capabilities, the risks of unrestricted AI development, and the existential threats posed by advanced AI. It also explores the societal impact of technologies like facial recognition and behavioral monitoring, urging a balanced approach to embracing technological advancement.
29 snips
Nov 11, 2024 • 4h 19min

Eliezer Yudkowsky and Stephen Wolfram on AI X-risk

Eliezer Yudkowsky, an AI researcher focused on safety, and Stephen Wolfram, the inventor behind Mathematica, tackle the looming existential risks of advanced AI. They debate the challenges of aligning AI goals with human values and ponder the unpredictable nature of AI's evolution. Yudkowsky warns of emergent AI objectives diverging from humanity's best interests, while Wolfram emphasizes understanding AI's computational nature. Their conversation digs deep into ethical implications, consciousness, and the paradox of AI goals.
23 snips
Nov 14, 2023 • 29min

Superintelligent AI: The Doomers

Yoshua Bengio, a pioneer of generative AI, and Eliezer Yudkowsky, a research lead at the Machine Intelligence Research Institute, discuss the existential risk of superintelligent AI. Yann LeCun, head of AI at Meta, disagrees and points out the potential benefits of superintelligent AI. Topics include the dangers of superintelligent machines, aligning AI systems with human values, and the potential and misuse of superintelligent AI.
15 snips
Nov 22, 2022 • 2h 12min

Making Sense of Artificial Intelligence

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating. And make sure to stick around for the end of each episode, where we provide our list of recommendations from the worlds of film, television, literature, music, and art.

In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI — including the control problem and the value-alignment problem — as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.

We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
10 snips
May 8, 2023 • 1h 18min

Eliezer Yudkowsky on the Dangers of AI

Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid, and why we're not prepared or able to manage the terrifying risks of artificial intelligence.