The Valmy
https://thevalmy.com/
Latest episodes

Oct 31, 2023 • 1h 52min
Tyler Cowen: From Avant-Garde to Pop (Bonus DJ Episode)
Podcast: Tetragrammaton with Rick Rubin
Release date: 2023-10-18
Tyler Cowen has long nurtured an obsession with music. It's one of the few addictions Tyler believes is actually conducive to a fulfilling intellectual life. In this bonus episode, an addendum to Rick's conversation with Tyler, Rick sits with Tyler as he plays and talks through the music that moves him: from the outer bounds of the avant-garde to contemporary pop music and all points in between.

Sep 27, 2023 • 1h 13min
233 | Hugo Mercier on Reasoning and Skepticism
Podcast: Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Release date: 2023-04-17
Here at the Mindscape Podcast, we are firmly pro-reason. But what does that mean, fundamentally and in practice? How did humanity come into the idea of not just doing things, but doing things for reasons? In this episode we talk with cognitive scientist Hugo Mercier about these issues. He is the co-author (with Dan Sperber) of The Enigma of Reason, about how the notion of reason came to be, and more recently the author of Not Born Yesterday, about whom we trust and what we believe. He argues that our main shortcoming is not that we are insufficiently skeptical of radical claims, but that we are too skeptical of claims that don't fit our views.
Hugo Mercier received a Ph.D. in cognitive sciences from the École des Hautes Études en Sciences Sociales. He is currently a Permanent CNRS Research Scientist at the Institut Jean Nicod, Paris. Among his awards is the Prime d'excellence from the CNRS.

Aug 30, 2023 • 46min
Tom Holland: Dominion
Podcast: The Book Club
Release date: 2019-12-04
In this week's Book Club, Sam's guest is the historian Tom Holland, author of the new book Dominion: The Making of the Western Mind. The book, though you might not know it from the cover, as Tom remarks, is essentially a history of Christianity, and an account of the myriad ways, many of them invisible to us, that it has shaped and continues to shape Western culture. It's a book and an argument that takes us from Ancient Babylon to Harvey Weinstein's hotel room, draws in the Beatles and the Nazis, and orbits around two giant figures: St Paul and Nietzsche. Is there a single discernible, distinctive Christian way of thinking? Is secularism Christianity by other means? And are our modern-day culture wars between alt-righters and woke progressives a post-Christian phenomenon or, as Tom argues, essentially a civil war between two Christian sects?
Presented by Sam Leith.

Aug 28, 2023 • 1h 9min
How quickly is AI advancing? And should you be working in the field? (with Danny Hernandez)
Podcast: Clearer Thinking with Spencer Greenberg
Release date: 2023-08-23
Along what axes and at what rates is the AI industry growing? What algorithmic developments have yielded the greatest efficiency boosts? When, if ever, will we hit the upper limits of the amount of computing power, data, money, etc., we can throw at AI development? Why do some people fixate on particular tasks that particular AI models can't perform and conclude that AIs are still pretty dumb and won't be taking our jobs any time soon? What kinds of tasks are more or less easily automatable? Should more people work on AI? What does it mean to "take ownership" of our friendships? What sorts of thinking patterns employed by AI engineers can be beneficial in other areas of life? How can we make better decisions, especially about large things like careers and relationships?
Danny Hernandez was an early AI researcher at OpenAI and Anthropic. He's best known for measuring macro progress in AI. For example, he helped show that the compute of the largest training runs was growing at 10x per year between 2012 and 2017. He also helped show an algorithmic equivalent of Moore's Law that was faster, and he's done work on scaling laws and mechanistic interpretability of learning from repeated data. He is currently focused on alignment research.

Aug 23, 2023 • 1h 9min
Samo Burja - The Great Founder Theory of History
Podcast: Invest Like the Best with Patrick O'Shaughnessy
Episode: Samo Burja - The Great Founder Theory of History - [Invest Like the Best, EP.339]
Release date: 2023-08-01
My guest today is Samo Burja. Samo is the founder of the consulting firm Bismarck Analysis and has dedicated his life's work to understanding why there has never been an immortal society. His research focuses on institutions, the founders behind them, how they rise, and why they always fall in the end. As you'll hear, Samo has an encyclopedic grasp of history, and his work has led him to some fascinating theories about human progress, the nature of exceptional founders, and the future of different societies across the world. Please enjoy my conversation with Samo Burja.
Show Notes:
(00:02:52) - (First question) - The core thesis behind the Great Founder Theory
(00:06:40) - Great ideas inevitably being discovered at some point in history
(00:08:45) - The historic implications of a global adoption of the Great Founder Theory
(00:10:51) - The different possible directions of future trends
(00:17:08) - Distinctions between great founders versus live players
(00:22:15) - Common misconceptions about what qualifies one as a great founder
(00:24:38) - Noteworthy great founders in the United States
(00:28:34) - Recurring observable traits and common themes of great founders
(00:31:29) - Using caution when projecting a mythic lens onto great founders
(00:37:53) - Social technology as the upstream effects of prior material technology
(00:43:32) - Whether or not institutions play a role in propagating the work of great founders
(00:49:08) - The role of power and differences between owned and borrowed power
(00:56:51) - Additional ideas that play an outsized role in shaping the world
(01:01:09) - A differing worldview to his own that he finds interesting
(01:04:53) - Whether or not capital allocators can benefit from the Great Founder Theory
(01:07:37) - The kindest thing anyone has ever done for him

Aug 17, 2023 • 4h 24min
Stephen Wolfram — Constructing the Computational Paradigm
Podcast: The Joe Walker Podcast
Release date: 2023-08-16
Stephen Wolfram is a physicist, computer scientist, and businessman. He is the founder and CEO of Wolfram Research, the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science.
Full transcript available at: jnwpod.com.

Aug 8, 2023 • 1h 59min
Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress
Podcast: Dwarkesh Podcast
Release date: 2023-08-08
Here is my conversation with Dario Amodei, CEO of Anthropic. Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.
Timestamps:
(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic's Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile

Jul 7, 2023 • 52min
Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection
Podcast: No Priors: Artificial Intelligence | Technology | Startups
Release date: 2023-05-11
Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi.
Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship.
Sarah and Elad also discuss Mustafa's upcoming book, The Coming Wave (release date September 12, 2023), which examines the political ramifications of the AI and digital biology revolutions.
Show Links:
Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot
Inflection.ai
Mustafa-Suleyman.ai
Show Notes:
[00:06] - From Conflict Resolution to AI Pioneering
[10:36] - Defining Intelligence
[15:32] - DeepMind's Journey and Breakthroughs
[24:45] - The Future of Personal AI Companionship
[33:22] - AI and the Future of Personalized Content
[41:49] - The Launch of Pi
[51:12] - Mustafa's New Book The Coming Wave

Jun 27, 2023 • 3h 7min
Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future
Podcast: Dwarkesh Podcast
Release date: 2023-06-26
The second half of my 7-hour conversation with Carl Shulman is out! My favorite part, and the one that had the biggest impact on my worldview. Here, Carl lays out how an AI takeover might happen:
* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.
Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:
* what is the far future best case scenario for humanity
* what it would look like to have AI make thousands of years of intellectual progress in a month
* how do we detect deception in superhuman models
* does space warfare favor defense or offense
* is a Malthusian state inevitable in the long run
* why markets haven't priced in explosive economic growth
* & much more
Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.
Timestamps:
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire

Jun 16, 2023 • 1h 3min
Predictable updating about AI risk
Podcast: Joe Carlsmith Audio
Release date: 2023-05-08
How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.
Text version here: https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk