The Valmy

Peter Hartree
Aug 23, 2023 • 1h 9min

Samo Burja - The Great Founder Theory of History

Podcast: Invest Like the Best with Patrick O'Shaughnessy
Episode: Samo Burja - The Great Founder Theory of History - [Invest Like the Best, EP.339]
Release date: 2023-08-01
Get Podcast Transcript → powered by Listen411 (fast audio-to-text and summarization)

My guest today is Samo Burja. Samo is the founder of the consulting firm Bismarck Analysis, and has dedicated his life's work to understanding why there has never been an immortal society. His research focuses on institutions, the founders behind them, how they rise, and why they always fall in the end. As you'll hear, Samo has an encyclopedic grasp of history, and his work has led him to some fascinating theories about human progress, the nature of exceptional founders, and the future of different societies across the world. Please enjoy my conversation with Samo Burja.

Listen to Founders Podcast. Founders Episode 311: James Cameron.

For the full show notes, transcript, and links to mentioned content, check out the episode page here.

-----

This episode is brought to you by Tegus. Tegus is the modern research platform for leading investors. Tired of running your own expert calls to get up to speed on a company? Tegus lets you ramp faster and find answers to critical questions more efficiently than any alternative method. The gold standard for research, the Tegus platform delivers unmatched access to timely, qualitative insights through the largest and most differentiated expert call transcript database. With over 60,000 transcripts spanning 22,000 public and private companies, investors can accelerate their fundamental research process by discovering highly differentiated and reliable insights that can't be found anywhere else in the market. As a listener, drive your next investment thesis forward with Tegus for free at tegus.co/patrick.

-----

Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes. Past guests include Tobi Lutke, Kevin Systrom, Mike Krieger, John Collison, Kat Cole, Marc Andreessen, Matthew Ball, Bill Gurley, Anu Hariharan, Ben Thompson, and many more.

Stay up to date on all our podcasts by signing up to Colossus Weekly, our quick dive every Sunday highlighting the top business and investing concepts from our podcasts and the best of what we read that week. Sign up here.

Follow us on Twitter: @patrick_oshag | @JoinColossus

Show Notes:
(00:02:52) - (First question) - The core thesis behind the Great Founder Theory
(00:06:40) - Great ideas inevitably being discovered at some point in history
(00:08:45) - The historic implications of a global adoption of the Great Founder Theory
(00:10:51) - The different possible directions of future trends
(00:17:08) - Distinctions between great founders versus live players
(00:22:15) - Common misconceptions about what qualifies one as a great founder
(00:24:38) - Noteworthy great founders in the United States
(00:28:34) - Recurring observable traits and common themes of great founders
(00:31:29) - Using caution when projecting a mythic lens onto great founders
(00:37:53) - Social technology as the upstream effects of prior material technology
(00:43:32) - Whether or not institutions play a role in propagating the work of great founders
(00:49:08) - The role of power and differences between owned and borrowed power
(00:56:51) - Additional ideas that play an outsized role in shaping the world
(01:01:09) - A differing worldview to his own that he finds interesting
(01:04:53) - Whether or not capital allocators can benefit from the Great Founder Theory
(01:07:37) - The kindest thing anyone has ever done for him
Aug 17, 2023 • 4h 24min

Stephen Wolfram — Constructing the Computational Paradigm

Podcast: The Joe Walker Podcast
Episode: Stephen Wolfram — Constructing the Computational Paradigm
Release date: 2023-08-16

Stephen Wolfram is a physicist, computer scientist, and businessman. He is the founder and CEO of Wolfram Research, the creator of Mathematica and Wolfram Alpha, and the author of A New Kind of Science.

Full transcript available at: jnwpod.com.
See omnystudio.com/listener for privacy information.
Aug 8, 2023 • 1h 59min

Dario Amodei (Anthropic CEO) — The hidden pattern behind every AI breakthrough

Podcast: Dwarkesh Podcast
Episode: Dario Amodei (Anthropic CEO) — The hidden pattern behind every AI breakthrough
Release date: 2023-08-08

Here is my conversation with Dario Amodei, CEO of Anthropic. Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps:
(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic's Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jul 7, 2023 • 52min

Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection

Podcast: No Priors: Artificial Intelligence | Technology | Startups
Episode: Will Everyone Have a Personal AI? With Mustafa Suleyman, Founder of DeepMind and Inflection
Release date: 2023-05-11

Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi.

Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship.

Sarah and Elad also discuss Mustafa's upcoming book, The Coming Wave (releasing September 12, 2023), which examines the political ramifications of the AI and digital biology revolutions.

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot
Inflection.ai
Mustafa-Suleyman.ai

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleyman

Show Notes:
[00:06] - From Conflict Resolution to AI Pioneering
[10:36] - Defining Intelligence
[15:32] - DeepMind's Journey and Breakthroughs
[24:45] - The Future of Personal AI Companionship
[33:22] - AI and the Future of Personalized Content
[41:49] - The Launch of Pi
[51:12] - Mustafa's New Book, The Coming Wave
Jun 27, 2023 • 3h 7min

Carl Shulman (Pt 2) — AI Takeover, bio & cyber attacks, detecting deception, & humanity's far future

Podcast: Dwarkesh Podcast
Episode: Carl Shulman (Pt 2) — AI Takeover, bio & cyber attacks, detecting deception, & humanity's far future
Release date: 2023-06-26

The second half of my 7-hour conversation with Carl Shulman is out! My favorite part, and the one that had the biggest impact on my worldview.

Here, Carl lays out how an AI takeover might happen:
* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.

Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:
* what is the far future best case scenario for humanity,
* what it would look like to have AI make thousands of years of intellectual progress in a month,
* how do we detect deception in superhuman models,
* does space warfare favor defense or offense,
* is a Malthusian state inevitable in the long run,
* why markets haven't priced in explosive economic growth,
* & much more.

Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes. Catch part 1 here.

Timestamps:
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jun 16, 2023 • 1h 3min

Predictable updating about AI risk

Podcast: Joe Carlsmith Audio
Episode: Predictable updating about AI risk
Release date: 2023-05-08

How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.

Text version here: https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk
Jun 14, 2023 • 2h 44min

Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment

Podcast: Dwarkesh Podcast
Episode: Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment
Release date: 2023-06-14

In terms of the depth and range of topics, this episode is the best I've done. No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of. We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.

This part is about Carl's model of an intelligence explosion, which integrates everything from:
* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.

We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he's more optimistic than Eliezer.

The next part, which I'll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff. Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps:
(00:00:00) - Intro
(00:01:32) - Intelligence Explosion
(00:18:03) - Can AIs do AI research?
(00:39:00) - Primate evolution
(01:03:30) - Forecasting AI progress
(01:34:20) - After human-level AGI
(02:08:39) - AI takeover scenarios

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Jun 8, 2023 • 52min

Peter Singer on Utilitarianism, Influence, and Controversial Ideas

Podcast: Conversations with Tyler
Episode: Peter Singer on Utilitarianism, Influence, and Controversial Ideas
Release date: 2023-06-07

Peter Singer is one of the world's most influential living philosophers, whose ideas have motivated millions of people to change how they eat, how they give, and how they interact with each other and the natural world.

Peter joined Tyler to discuss whether utilitarianism is only tractable at the margin, how Peter thinks about the meat-eater problem, why he might side with aliens over humans, at what margins he would police nature, the utilitarian approach to secularism and abortion, what he's learned producing the Journal of Controversial Ideas, what he'd change about the current Effective Altruism movement, where Derek Parfit went wrong, to what extent we should respect the wishes of the dead, why professional philosophy is so boring, his advice on how to enjoy our lives, what he'll be doing after retiring from teaching, and more.

Read a full transcript enhanced with helpful links, or watch the full video. Recorded May 25th, 2023.

Other ways to connect:
Follow us on Twitter and Instagram
Follow Tyler on Twitter
Follow Peter on Twitter
Email us: cowenconvos@mercatus.gmu.edu
Learn more about Conversations with Tyler and other Mercatus Center podcasts here.

Photo credit: Katarzyna de Lazari-Radek
Jun 8, 2023 • 3h 27min

#152 – Joe Carlsmith on navigating serious philosophical confusion

Podcast: 80,000 Hours Podcast
Episode: #152 – Joe Carlsmith on navigating serious philosophical confusion
Release date: 2023-05-19

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones? Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what's a species to do?

In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation, because, in the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living in such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any particular rebuttal to this 'simulation argument.' If true, it could revolutionise our comprehension of the universe and the way we ought to live...

Other two ideas cut for length — click here to read the full post.

These are just three particular instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always taking philosophy and arguments seriously, and try to act on them, it can lead to what seem like some pretty crazy and impractical places. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:
* What Joe doesn't like about the drowning child thought experiment
* An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
* What Joe doesn't like about the expression "the train to crazy town"
* Whether Elon Musk should place a higher probability on living in a simulation than most other people
* Whether the deterministic twin prisoner's dilemma, if fully appreciated, gives us an extra reason to keep promises
* To what extent learning to doubt our own judgement about difficult questions -- so-called "epistemic learned helplessness" -- is a good thing
* How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Chapters:
Rob's intro (00:00:00)
The interview begins (00:09:21)
Downsides of the drowning child thought experiment (00:12:24)
Making demanding moral values more resonant (00:24:56)
The crazy train (00:36:48)
Whether we're living in a simulation (00:48:50)
Reasons to doubt we're living in a simulation, and practical implications if we are (00:57:02)
Rob's explainer about anthropics (01:12:27)
Back to the interview (01:19:53)
Decision theory and affecting the past (01:23:33)
Rob's explainer about decision theory (01:29:19)
Back to the interview (01:39:55)
Newcomb's problem (01:46:14)
Practical implications of acausal decision theory (01:50:04)
The hitchhiker in the desert (01:55:57)
Acceptance within philosophy (02:01:22)
Infinite ethics (02:04:35)
Rob's explainer about the expanding spheres approach (02:17:05)
Back to the interview (02:20:27)
Infinite ethics and the utilitarian dream (02:27:42)
Rob's explainer about epicycles (02:29:30)
Back to the interview (02:31:26)
What to do with all of these weird philosophical ideas (02:35:28)
Welfare longtermism and wisdom longtermism (02:53:23)
Epistemic learned helplessness (03:03:10)
Power-seeking AI (03:12:41)
Rob's outro (03:25:45)

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore
Jun 7, 2023 • 2h 35min

Jeff Hawkins (Thousand Brains Theory)

Podcast: Machine Learning Street Talk (MLST)
Episode: #59 - Jeff Hawkins (Thousand Brains Theory)
Release date: 2021-09-03

Patreon: https://www.patreon.com/mlst

The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity's greatest challenges.

Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains model reality based on thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn't think there is just one model, but rather thousands.

Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It's an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too.

https://numenta.com/a-thousand-brains-by-jeff-hawkins/
https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/

Panel:
Dr. Keith Duggar https://twitter.com/DoctorDuggar
Connor Leahy https://twitter.com/npcollapse
