

The Valmy
Peter Hartree
https://thevalmy.com/
Episodes

May 11, 2023 • 2h 17min
#299 – Demis Hassabis: DeepMind
Podcast: Lex Fridman Podcast
Episode: #299 – Demis Hassabis: DeepMind
Release date: 2022-07-01

Demis Hassabis is the CEO and co-founder of DeepMind. Please support this podcast by checking out our sponsors:
– Mailgun: https://lexfridman.com/mailgun
– InsideTracker: https://insidetracker.com/lex to get 20% off
– Onnit: https://lexfridman.com/onnit to get up to 10% off
– Indeed: https://indeed.com/lex to get $75 credit
– Magic Spoon: https://magicspoon.com/lex and use code LEX to get $5 off
EPISODE LINKS:
Demis’s Twitter: https://twitter.com/demishassabis
DeepMind’s Twitter: https://twitter.com/DeepMind
DeepMind’s Instagram: https://instagram.com/deepmind
DeepMind’s Website: https://deepmind.com
Plasma control paper: https://nature.com/articles/s41586-021-04301-9
Quantum simulation paper: https://science.org/doi/10.1126/science.abj6511
The Emperor’s New Mind (book): https://amzn.to/3bx03lo
Life Ascending (book): https://amzn.to/3AhUP7z
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips
SUPPORT & CONNECT:
– Check out the sponsors above; it’s the best way to support this podcast
– Support on Patreon: https://www.patreon.com/lexfridman
– Twitter: https://twitter.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Medium: https://medium.com/@lexfridman
OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) – Introduction
(07:17) – Turing Test
(14:43) – Video games
(36:18) – Simulation
(38:29) – Consciousness
(43:29) – AlphaFold
(57:09) – Solving intelligence
(1:09:28) – Open sourcing AlphaFold & MuJoCo
(1:19:34) – Nuclear fusion
(1:23:38) – Quantum simulation
(1:26:46) – Physics
(1:30:13) – Origin of life
(1:34:52) – Aliens
(1:42:59) – Intelligent life
(1:46:08) – Conscious AI
(1:59:23) – Power
(2:03:53) – Advice for young people
(2:11:59) – Meaning of life

May 7, 2023 • 3h 2min
#150 – Tom Davidson on how quickly AI could transform the world
Podcast: 80,000 Hours Podcast
Episode: #150 – Tom Davidson on how quickly AI could transform the world
Release date: 2023-05-05

It’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.

For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?

You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”

But this 1,000x yearly improvement is a prediction based on *real economic models* created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.

Links to learn more, summary and full transcript.

As a teaser, consider the following:

Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.

You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. — sounds implausible, or at least like it’ll take decades. But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research. And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.

And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.

To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s *An Inconvenient Truth*, and your first chance to play the Nintendo Wii. Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now. Wild.

Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.

Luisa and Tom also discuss:
• How we might go from GPT-4 to AI disaster
• Tom’s journey from finding AI risk to be kind of scary to really scary
• Whether international cooperation or an anti-AI social movement can slow AI progress down
• Why it might take just a few years to go from pretty good AI to superhuman AI
• How quickly the number and quality of computer chips we’ve been using for AI have been increasing
• The pace of algorithmic progress
• What ants can teach us about AI
• And much more

Chapters:
Rob’s intro (00:00:00)
The interview begins (00:04:53)
How we might go from GPT-4 to disaster (00:13:50)
Explosive economic growth (00:24:15)
Are there any limits for AI scientists? (00:33:17)
This seems really crazy (00:44:16)
How is this going to go for humanity? (00:50:49)
Why AI won’t go the way of nuclear power (01:00:13)
Can we definitely not come up with an international treaty? (01:05:24)
How quickly we should expect AI to “take off” (01:08:41)
Tom’s report on AI takeoff speeds (01:22:28)
How quickly will we go from 20% to 100% of tasks being automated by AI systems? (01:28:34)
What percent of cognitive tasks AI can currently perform (01:34:27)
Compute (01:39:48)
Using effective compute to predict AI takeoff speeds (01:48:01)
How quickly effective compute might increase (02:00:59)
How quickly chips and algorithms might improve (02:12:31)
How to check whether large AI models have dangerous capabilities (02:21:22)
Reasons AI takeoff might take longer (02:28:39)
Why AI takeoff might be very fast (02:31:52)
Fast AI takeoff speeds probably means shorter AI timelines (02:34:44)
Going from human-level AI to superhuman AI (02:41:34)
Going from AGI to AI deployment (02:46:59)
Were these arguments ever far-fetched to Tom? (02:49:54)
What ants can teach us about AI (02:52:45)
Rob’s outro (03:00:32)

Producer: Keiran Harris
Audio mastering: Simon Monsour and Ben Cordell
Transcriptions: Katy Moore
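Editor's note: for readers wondering what the “effective compute” chapters above refer to, effective compute is commonly defined as physical compute scaled by algorithmic efficiency, so separate sources of progress multiply together. A minimal sketch follows; every growth rate in it is a labeled placeholder assumption, not a number from Tom’s report:

```python
# Illustrative sketch: "effective compute" = physical compute x algorithmic
# efficiency, so the yearly growth rates compound multiplicatively.
# The rates below are made-up placeholders, NOT figures from Tom's report.

hardware_growth = 1.6  # assumed yearly multiplier from better chips
spend_growth = 1.5     # assumed yearly multiplier from bigger training budgets
algo_growth = 2.0      # assumed yearly multiplier from better algorithms

effective_compute = 1.0
for year in range(1, 11):
    effective_compute *= hardware_growth * spend_growth * algo_growth
    print(f"year {year:2d}: {effective_compute:12,.0f}x baseline")

# With these placeholders, effective compute grows 4.8x per year and passes
# 1,000x of baseline within five years: modest per-factor multipliers
# compound into very large combined growth.
```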

May 2, 2023 • 1h 50min
168 - How to Solve AI Alignment with Paul Christiano
Podcast: Bankless
Episode: 168 - How to Solve AI Alignment with Paul Christiano
Release date: 2023-04-24

Paul Christiano runs the Alignment Research Center, a non-profit research organization whose mission is to align future machine learning systems with human interests. Paul previously ran the language model alignment team at OpenAI, the creators of ChatGPT. Today, we’re hoping to explore the solution-landscape to the AI Alignment problem, and hoping Paul can guide us on that journey.

✨ DEBRIEF | Unpacking the episode: https://www.bankless.com/debrief-paul-christiano
✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint
✨ Always wanted to become a Token Analyst? Bankless Citizens get exclusive access to Token Hub. Join them: https://bankless.cc/TokenHubRSS

In today’s episode, Paul answers many questions, but the overarching ones are:
1) How BIG is the AI Alignment problem?
2) How HARD is the AI Alignment problem?
3) How SOLVABLE is the AI Alignment problem?
Does humanity have a chance? Tune in to hear Paul’s thoughts.

BANKLESS SPONSOR TOOLS:
⚖️ ARBITRUM | SCALING ETHEREUM: https://bankless.cc/Arbitrum
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE: https://bankless.cc/kraken
🦄 UNISWAP | ON-CHAIN MARKETPLACE: https://bankless.cc/uniswap
👻 PHANTOM | FRIENDLY MULTICHAIN WALLET: https://bankless.cc/phantom-waitlist
🦊 METAMASK LEARN | HELPFUL WEB3 RESOURCE: https://bankless.cc/MetaMask

Topics Covered:
0:00 Intro
9:20 Percentage Likelihood of Death by AI
11:24 Timing
19:15 Chimps to Human Jump
21:55 Thoughts on ChatGPT
27:51 LLMs & AGI
32:49 Time to React?
38:29 AI Takeover
41:51 AI Agency
49:35 Loopholes
51:14 Training AIs to Be Honest
58:00 Psychology
59:36 How Solvable Is the AI Alignment Problem?
1:03:48 The Technical Solutions (Scalable Oversight)
1:16:14 Training AIs to Be Bad?!
1:18:22 More Solutions
1:21:36 Stabby AIs
1:26:03 Public vs. Private (Lab) AIs
1:28:31 Inside Neural Nets
1:32:11 4th Solution
1:35:00 Manpower & Funding
1:38:15 Pause AI?
1:43:29 Resources & Education on AI Safety
1:46:13 Talent
1:49:00 Paul’s Day Job
1:50:15 Nobel Prize
1:52:35 Treating AIs with Respect
1:53:41 Utopia Scenario
1:55:50 Closing & Disclaimers

Resources:
Alignment Research Center: https://www.alignment.org/
Paul Christiano’s Website: https://paulfchristiano.com/ai/

Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

Disclosure: From time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures

Mar 28, 2023 • 48min
Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence
Podcast: Dwarkesh Podcast
Episode: Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence
Release date: 2023-03-27

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:
* time to AGI
* leaks and spies
* what's after generative models
* post AGI futures
* working with Microsoft and competing with Google
* difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps:
(00:00) - Time to AGI
(05:57) - What’s after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Mar 24, 2023 • 53min
Tom Holland on History, Christianity, and the Value of the Countryside
Podcast: Conversations with Tyler
Episode: Tom Holland on History, Christianity, and the Value of the Countryside
Release date: 2023-03-22

Historian Tom Holland joined Tyler to discuss in what ways his Christianity is influenced by Lord Byron, how the Book of Revelation precipitated a revolutionary tradition, which book of the Bible is most foundational for Western liberalism, the political differences between Paul and Jesus, why America is more pro-technology than Europe, why Herodotus is his favorite writer, why the Greeks and Persians didn’t industrialize despite having advanced technology, how he feels about devolution in the United Kingdom and the potential of Irish unification, what existential problem the Church of England faces, how the music of Ennio Morricone helps him write for a popular audience, why Jurassic Park is his favorite movie, and more.

Read a full transcript enhanced with helpful links, or watch the full video.

Recorded February 1st, 2023

Other ways to connect:
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Tom on Twitter
Email us: cowenconvos@mercatus.gmu.edu

Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Photo credit: Sadie Holland

Mar 15, 2023 • 49min
Is our search for an objective morality misguided? | Slavoj Žižek, Joanna Kavenna, Simon Blackburn
Podcast: Philosophy For Our Times
Episode: Is our search for an objective morality misguided? | Slavoj Žižek, Joanna Kavenna, Simon Blackburn
Release date: 2023-03-14

Should we think of morality in terms of objective truth or social consensus?

Looking for a link we mentioned? It's here: https://linktr.ee/philosophyforourtimes

Once the fashion of a postmodern age, moral relativism has always had its detractors, many of them religious. But now a new breed of atheist celebrity thinkers, from Sam Harris to Peter Singer, are making claims for the existence of absolute moral truths. Critics argue that, like authoritarian moralists of the past, they use so-called 'objective' morality to shore up their own prejudices and silence dissent.

Firebrand philosopher Slavoj Žižek, bestselling author of Zed Joanna Kavenna, and philosopher and author of Truth Simon Blackburn debate objective morality in a postmodern age. Hosted by Professor and Chair of Jurisprudence at the University of Oxford, Ruth Chang.

There are thousands of big ideas to discover at IAI.tv – videos, articles, and courses waiting for you to explore. Find out more: https://iai.tv/podcast-offers?utm_source=podcast&utm_medium=shownotes&utm_campaign=[iai-tv-episode-title]

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Mar 15, 2023 • 54min
Yasheng Huang on the Development of the Chinese State
Podcast: Conversations with Tyler
Episode: Yasheng Huang on the Development of the Chinese State
Release date: 2023-03-08

Yasheng Huang has written two of Tyler’s favorite books on China: Capitalism with Chinese Characteristics, which contrasts an entrepreneurial rural China and a state-controlled urban China, and The Rise and Fall of the EAST, which argues that Keju—China’s civil service exam system—played a key role in the growth and expanding power of the Chinese state.

Yasheng joined Tyler to discuss China’s lackluster technological innovation, why declining foreign investment is more of a concern than a declining population, why Chinese literacy stagnated in the 19th century, how he believes the imperial exam system deprived China of a thriving civil society, why Chinese succession has been so stable, why the Six Dynasties is his favorite period in Chinese history, why there were so few female emperors, why Chinese and Chinese Americans have done less well becoming top CEOs of American companies compared to Indians and Indian Americans, where he’d send someone on a two-week trip to China, what he learned from János Kornai, and more.

Read a full transcript enhanced with helpful links, or watch the full video.

Recorded January 17th, 2023

Other ways to connect:
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Yasheng on Twitter
Email us: cowenconvos@mercatus.gmu.edu

Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Photo credit: MIT Sloan School

Mar 13, 2023 • 2h 9min
Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan Labenz
Podcast: "Moment of Zen" Episode: Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan LabenzRelease date: 2023-03-11Get Podcast Transcript →powered by Listen411 - fast audio-to-text and summarizationAnonymous founders of the Effective Accelerationist (e/acc) movement @Bayeslord and Beff Jezoz (@BasedBeff) join Erik Torenberg, Dan Romero, and Nathan Labenz to debate views on AI safety.We're hiring across the board at Turpentine and for Erik's personal team on other projects he's incubating. He's hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: eriktorenberg.com.RECOMMENDED PODCAST:The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix’s culture deck Patty McCord.https://link.chtbl.com/hrhereticsTIMESTAMPS:(00:00) Episode preview(03:00) Intro to effective accelerationism(08:00) Differences between effective accelerationism and effective altruism(23:00) Effective accelerationism is bottoms-up(42:00) Transhumanism(46:00) "Equanimity amidst the singularity"(48:30) Why AI safety is the wrong frame(56:00) Pushing back against effective accelerationism(01:06:00) The case for AI safety(01:24:00) Upgrading civilizational infrastructure(01:33:00) Effective accelerationism is anti-fragile(01:39:00) Will we botch AI like we botched nuclear?(01:46:00) Hidden costs of emphasizing downsides(2:00:00) Are we in the same position as neanderthals, before humans?(2:09:00) "Doomerism has an unpriced opportunity cost of upside"SPONSORS: Beehiiv | Shopify | SecureframeHead to Beehiiv, the newsletter platform built for growth, to power your own. Connect with premium brands, scale your audience, and deliver a beautiful UX that stands out in an inbox. 🐝 to https://Beehiiv.com and use code “MOZ” for 20% off your first three months-Shopify: https://shopify.com/momentofzen for a $1/month trial periodShopify is the global commerce platform that helps you sell at every stage of your business. Shopify powers 10% of all e-commerce in the US. And Shopify’s the global force behind Allbirds, Rothy’s, and Brooklinen, and 1,000,000s of other entrepreneurs across 175 countries. From their all-in-one e-commerce platform, to their in-person POS system – wherever and whatever you’re selling, Shopify’s got you covered. With free Shopify Magic, sell more with less effort by whipping up captivating content that converts – from blog posts to product descriptions using AI. Sign up for $1/month trial period: https://shopify.com/momentofzen-Secureframe (www.secureframe.comSecureframe is the leading all-in-one platform for security and privacy compliance. Get SOC-2 audit ready in weeks, not months. I believe in Secureframe so much that I invested in it, and I recommend it to all my portfolio companies. 
Sign up for a free demo and mention MOMENT OF ZEN during your demo to get 20% off your first year.

Mar 5, 2023 • 12min
Robin Hanson, George Mason University | Deflecting The Sacred
Podcast: Foresight Institute Radio
Episode: Robin Hanson, George Mason University | Deflecting The Sacred
Release date: 2023-03-02

Robin Dale Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as an expert on idea futures and markets, and he was involved in the creation of the Foresight Exchange and DARPA’s FutureMAP project. He invented market scoring rules like LMSR (Logarithmic Market Scoring Rule) used by prediction markets such as Consensus Point (where Hanson is Chief Scientist), and has conducted research on signaling.

When attempting to effect change in the world, you will inevitably run up against concepts that others consider sacred. This creates a very tough barrier to change, especially if you are trying to change something like democracy, family, or religion. The essence of the sacred is the bond shared between those who consider a particular idea sacred. It is difficult for people to see things the same way when viewing them in high resolution, so sacred things tend to be viewed in abstract terms, even when looking at them up close, to allow for consensus.

Session Summary: Robin Hanson, George Mason University | Deflecting The Sacred - Foresight Institute

The Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1987 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focuses on several fields of science and technology that are too ambitious for legacy institutions to support.

Allison Duettmann is the president and CEO of Foresight Institute. She directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, Fellowships, Prizes, and Tech Trees, and shares this work with the public. She founded Existentialhope.com, co-edited Superintelligence: Coordination & Strategy, co-authored Gaming the Future, and co-initiated The Longevity Prize.

Apply to Foresight’s virtual salons and in-person workshops here!

We are entirely funded by your donations. If you enjoy what we do please consider donating through our donation page.

Visit our website for more content, or join us here: Twitter, Facebook, LinkedIn

Every word ever spoken on this podcast is now AI-searchable using Fathom.fm, a search engine for podcasts.

Hosted on Acast. See acast.com/privacy for more information.
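Editor's note: since the bio above name-checks the LMSR, here is a minimal sketch of how it works. The market maker maintains a cost function C(q) = b * ln(sum_i exp(q_i / b)) over outstanding share quantities q, instantaneous prices are the partial derivatives of C, and a trader pays the change in C. The liquidity parameter b and the example trade are illustrative assumptions, not details from the talk:

```python
import math

# Minimal LMSR sketch. The liquidity parameter b and the example trade
# below are illustrative assumptions, not details from the episode.

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i (a probability between 0 and 1)."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, i, shares, b=100.0):
    """What a trader pays to buy `shares` of outcome i: C(q') - C(q)."""
    new_q = list(quantities)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# Two-outcome market with no shares outstanding: both prices start at 0.5.
q = [0.0, 0.0]
print(lmsr_price(q, 0))        # 0.5
print(trade_cost(q, 0, 10.0))  # ~5.12; buying pushes the price of outcome 0 up
```

A larger b makes prices move less per share traded (deeper liquidity), at the cost of a larger worst-case subsidy from the market maker.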

Mar 3, 2023 • 32min
#59 – Chris Miller on the History of Semiconductors, TSMC, and the CHIPS Act
Podcast: Hear This Idea
Episode: #59 – Chris Miller on the History of Semiconductors, TSMC, and the CHIPS Act
Release date: 2023-03-02

Chris Miller is an Associate Professor of International History at Tufts University and author of the book “Chip War: The Fight for the World's Most Critical Technology” (the Financial Times Business Book of the Year). He is also a Visiting Fellow at the American Enterprise Institute, and Eurasia Director at the Foreign Policy Research Institute.
Over the next few episodes we will be exploring the potential for catastrophe caused by advanced artificial intelligence. But before we look ahead, we wanted to give a primer on where we are today: on the history and trends behind the development of AI so far. In this episode, we discuss:
How semiconductors have historically been related to US military strategy
How the Taiwanese company TSMC became such an important player in this space — while other countries’ attempts have failed
What the CHIPS Act signals about attitudes to compute governance in the decade ahead
Further reading is available on our website: hearthisidea.com/episodes/miller
If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!