
The Valmy

Latest episodes

Jun 14, 2023 • 2h 44min

Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

Podcast: Dwarkesh Podcast
Episode: Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment
Release date: 2023-06-14

In terms of the depth and range of topics, this episode is the best I've done.

No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.

We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl's model of an intelligence explosion, which integrates everything from:

* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he's more optimistic than Eliezer.

The next part, which I'll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.

Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jun 8, 2023 • 52min

Peter Singer on Utilitarianism, Influence, and Controversial Ideas

Podcast: Conversations with Tyler
Episode: Peter Singer on Utilitarianism, Influence, and Controversial Ideas
Release date: 2023-06-07

Peter Singer is one of the world's most influential living philosophers, whose ideas have motivated millions of people to change how they eat, how they give, and how they interact with each other and the natural world. Peter joined Tyler to discuss whether utilitarianism is only tractable at the margin, how Peter thinks about the meat-eater problem, why he might side with aliens over humans, at what margins he would police nature, the utilitarian approach to secularism and abortion, what he's learned producing the Journal of Controversial Ideas, what he'd change about the current Effective Altruism movement, where Derek Parfit went wrong, to what extent we should respect the wishes of the dead, why professional philosophy is so boring, his advice on how to enjoy our lives, what he'll be doing after retiring from teaching, and more.

Read a full transcript enhanced with helpful links, or watch the full video.

Recorded May 25th, 2023

Other ways to connect
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Peter on Twitter
Email us: cowenconvos@mercatus.gmu.edu
Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Photo credit: Katarzyna de Lazari-Radek
Jun 8, 2023 • 3h 27min

#152 – Joe Carlsmith on navigating serious philosophical confusion

Podcast: 80,000 Hours Podcast
Episode: #152 – Joe Carlsmith on navigating serious philosophical confusion
Release date: 2023-05-19

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and as surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what's a species to do?

In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument'. If true, it could revolutionise our comprehension of the universe and the way we ought to live.

The other two ideas are cut for length — click here to read the full post.

These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always take philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places.

So what should we do with this buffet of plausible-sounding but bewildering arguments? Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

* What Joe doesn't like about the drowning child thought experiment
* An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
* What Joe doesn't like about the expression "the train to crazy town"
* Whether Elon Musk should place a higher probability on living in a simulation than most other people
* Whether the deterministic twin prisoner's dilemma, if fully appreciated, gives us an extra reason to keep promises
* To what extent learning to doubt our own judgement about difficult questions — so-called "epistemic learned helplessness" — is a good thing
* How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Chapters:
Rob's intro (00:00:00)
The interview begins (00:09:21)
Downsides of the drowning child thought experiment (00:12:24)
Making demanding moral values more resonant (00:24:56)
The crazy train (00:36:48)
Whether we're living in a simulation (00:48:50)
Reasons to doubt we're living in a simulation, and practical implications if we are (00:57:02)
Rob's explainer about anthropics (01:12:27)
Back to the interview (01:19:53)
Decision theory and affecting the past (01:23:33)
Rob's explainer about decision theory (01:29:19)
Back to the interview (01:39:55)
Newcomb's problem (01:46:14)
Practical implications of acausal decision theory (01:50:04)
The hitchhiker in the desert (01:55:57)
Acceptance within philosophy (02:01:22)
Infinite ethics (02:04:35)
Rob's explainer about the expanding spheres approach (02:17:05)
Back to the interview (02:20:27)
Infinite ethics and the utilitarian dream (02:27:42)
Rob's explainer about epicycles (02:29:30)
Back to the interview (02:31:26)
What to do with all of these weird philosophical ideas (02:35:28)
Welfare longtermism and wisdom longtermism (02:53:23)
Epistemic learned helplessness (03:03:10)
Power-seeking AI (03:12:41)
Rob's outro (03:25:45)

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore
Jun 7, 2023 • 2h 35min

Jeff Hawkins (Thousand Brains Theory)

Podcast: Machine Learning Street Talk (MLST)
Episode: #59 - Jeff Hawkins (Thousand Brains Theory)
Release date: 2021-09-03

Patreon: https://www.patreon.com/mlst

The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity's greatest challenges.

Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build a model of reality based on thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn't think there is just one model, but rather thousands.

Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It's an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too.

https://numenta.com/a-thousand-brains-by-jeff-hawkins/
https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/

Panel:
Dr. Keith Duggar https://twitter.com/DoctorDuggar
Connor Leahy https://twitter.com/npcollapse
May 14, 2023 • 2h 58min

#63 – Ben Garfinkel on AI Governance

Podcast: Hear This Idea
Episode: #63 – Ben Garfinkel on AI Governance
Release date: 2023-05-13

Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI. In this episode we talk about:

* An overview of the AI governance space, and disentangling concrete research questions that Ben would like to see more work on
* Seeing how existing arguments for the risks from transformative AI have held up, and Ben's personal motivations for working on global risks from AI
* GovAI's own work and opportunities for listeners to get involved

Further reading and a transcript are available on our website: hearthisidea.com/episodes/garfinkel

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!
May 11, 2023 • 2h 17min

#299 – Demis Hassabis: DeepMind

Podcast: Lex Fridman Podcast
Episode: #299 – Demis Hassabis: DeepMind
Release date: 2022-07-01

Demis Hassabis is the CEO and co-founder of DeepMind. Please support this podcast by checking out our sponsors:
– Mailgun: https://lexfridman.com/mailgun
– InsideTracker: https://insidetracker.com/lex to get 20% off
– Onnit: https://lexfridman.com/onnit to get up to 10% off
– Indeed: https://indeed.com/lex to get $75 credit
– Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off

EPISODE LINKS:
Demis's Twitter: https://twitter.com/demishassabis
DeepMind's Twitter: https://twitter.com/DeepMind
DeepMind's Instagram: https://instagram.com/deepmind
DeepMind's Website: https://deepmind.com
Plasma control paper: https://nature.com/articles/s41586-021-04301-9
Quantum simulation paper: https://science.org/doi/10.1126/science.abj6511
The Emperor's New Mind (book): https://amzn.to/3bx03lo
Life Ascending (book): https://amzn.to/3AhUP7z

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
– Check out the sponsors above; it's the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that time.
(00:00) – Introduction
(07:17) – Turing Test
(14:43) – Video games
(36:18) – Simulation
(38:29) – Consciousness
(43:29) – AlphaFold
(57:09) – Solving intelligence
(1:09:28) – Open sourcing AlphaFold & MuJoCo
(1:19:34) – Nuclear fusion
(1:23:38) – Quantum simulation
(1:26:46) – Physics
(1:30:13) – Origin of life
(1:34:52) – Aliens
(1:42:59) – Intelligent life
(1:46:08) – Conscious AI
(1:59:23) – Power
(2:03:53) – Advice for young people
(2:11:59) – Meaning of life
May 7, 2023 • 3h 2min

#150 – Tom Davidson on how quickly AI could transform the world

Podcast: 80,000 Hours Podcast
Episode: #150 – Tom Davidson on how quickly AI could transform the world
Release date: 2023-05-05

It's easy to dismiss alarming AI-related predictions when you don't know where the numbers came from.

For example: what if we told you that within 15 years, it's likely that we'll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, "Congratulations, you said a big number — but this kind of stuff seems crazy, so I'm going to keep scrolling through Twitter."

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today's guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you'll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it'll take decades. But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.

And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it's hard to make the case that we won't achieve AGI very quickly.

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore's *An Inconvenient Truth*, and your first chance to play the Nintendo Wii. Tom thinks that if we have AI that significantly accelerates AI R&D, then it's hard to imagine not having AGI 17 years from now. Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:
• How we might go from GPT-4 to AI disaster
• Tom's journey from finding AI risk to be kind of scary to really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of computer chips we've been using for AI have been increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Chapters:
Rob's intro (00:00:00)
The interview begins (00:04:53)
How we might go from GPT-4 to disaster (00:13:50)
Explosive economic growth (00:24:15)
Are there any limits for AI scientists? (00:33:17)
This seems really crazy (00:44:16)
How is this going to go for humanity? (00:50:49)
Why AI won't go the way of nuclear power (01:00:13)
Can we definitely not come up with an international treaty? (01:05:24)
How quickly we should expect AI to "take off" (01:08:41)
Tom's report on AI takeoff speeds (01:22:28)
How quickly will we go from 20% to 100% of tasks being automated by AI systems? (01:28:34)
What percent of cognitive tasks AI can currently perform (01:34:27)
Compute (01:39:48)
Using effective compute to predict AI takeoff speeds (01:48:01)
How quickly effective compute might increase (02:00:59)
How quickly chips and algorithms might improve (02:12:31)
How to check whether large AI models have dangerous capabilities (02:21:22)
Reasons AI takeoff might take longer (02:28:39)
Why AI takeoff might be very fast (02:31:52)
Fast AI takeoff speeds probably mean shorter AI timelines (02:34:44)
Going from human-level AI to superhuman AI (02:41:34)
Going from AGI to AI deployment (02:46:59)
Were these arguments ever far-fetched to Tom? (02:49:54)
What ants can teach us about AI (02:52:45)
Rob's outro (03:00:32)

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore
May 2, 2023 • 1h 50min

168 - How to Solve AI Alignment with Paul Christiano

Podcast: Bankless
Episode: 168 - How to Solve AI Alignment with Paul Christiano
Release date: 2023-04-24

Paul Christiano runs the Alignment Research Center, a non-profit research organization whose mission is to align future machine learning systems with human interests. Paul previously ran the language model alignment team at OpenAI, the creators of ChatGPT.

Today, we're hoping to explore the solution-landscape to the AI Alignment problem, and hoping Paul can guide us on that journey.

------
✨ DEBRIEF | Unpacking the episode: https://www.bankless.com/debrief-paul-christiano
✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint
✨ Always wanted to become a Token Analyst? Bankless Citizens get exclusive access to Token Hub. Join Them. https://bankless.cc/TokenHubRSS
------

In today's episode, Paul answers many questions, but the overarching ones are:
1) How BIG is the AI Alignment problem?
2) How HARD is the AI Alignment problem?
3) How SOLVABLE is the AI Alignment problem?

Does humanity have a chance? Tune in to hear Paul's thoughts.

------
BANKLESS SPONSOR TOOLS:
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://bankless.cc/kraken
🦄 UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET https://bankless.cc/phantom-waitlist
🦊 METAMASK LEARN | HELPFUL WEB3 RESOURCE https://bankless.cc/MetaMask
------

Topics Covered
0:00 Intro
9:20 Percentage Likelihood of Death by AI
11:24 Timing
19:15 Chimps to Human Jump
21:55 Thoughts on ChatGPT
27:51 LLMs & AGI
32:49 Time to React?
38:29 AI Takeover
41:51 AI Agency
49:35 Loopholes
51:14 Training AIs to Be Honest
58:00 Psychology
59:36 How Solvable Is the AI Alignment Problem?
1:03:48 The Technical Solutions (Scalable Oversight)
1:16:14 Training AIs to be Bad?!
1:18:22 More Solutions
1:21:36 Stabby AIs
1:26:03 Public vs. Private (Lab) AIs
1:28:31 Inside Neural Nets
1:32:11 4th Solution
1:35:00 Manpower & Funding
1:38:15 Pause AI?
1:43:29 Resources & Education on AI Safety
1:46:13 Talent
1:49:00 Paul's Day Job
1:50:15 Nobel Prize
1:52:35 Treating AIs with Respect
1:53:41 Utopia Scenario
1:55:50 Closing & Disclaimers
------

Resources:
Alignment Research Center https://www.alignment.org/
Paul Christiano's Website https://paulfchristiano.com/ai/

-----
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure: from time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures
Mar 28, 2023 • 48min

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

Podcast: Dwarkesh Podcast
Episode: Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
Release date: 2023-03-27

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:

* time to AGI
* leaks and spies
* what's after generative models
* post-AGI futures
* working with Microsoft and competing with Google
* the difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps
(00:00) - Time to AGI
(05:57) - What's after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Mar 24, 2023 • 53min

Tom Holland on History, Christianity, and the Value of the Countryside

Podcast: Conversations with Tyler
Episode: Tom Holland on History, Christianity, and the Value of the Countryside
Release date: 2023-03-22

Historian Tom Holland joined Tyler to discuss in what ways his Christianity is influenced by Lord Byron, how the Book of Revelation precipitated a revolutionary tradition, which book of the Bible is most foundational for Western liberalism, the political differences between Paul and Jesus, why America is more pro-technology than Europe, why Herodotus is his favorite writer, why the Greeks and Persians didn't industrialize despite having advanced technology, how he feels about devolution in the United Kingdom and the potential of Irish unification, what existential problem the Church of England faces, how the music of Ennio Morricone helps him write for a popular audience, why Jurassic Park is his favorite movie, and more.

Read a full transcript enhanced with helpful links, or watch the full video.

Recorded February 1st, 2023

Other ways to connect
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Tom on Twitter
Email us: cowenconvos@mercatus.gmu.edu
Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Photo credit: Sadie Holland
