The Leverage Podcast

Evan Armstrong
Sep 16, 2025 • 1h 8min

So, Is AI Gonna Kill Us All?

Watch on YouTube • Listen on Spotify • Listen on Apple

Author’s note: Please remember to like and subscribe on the podcast player of your choice! It makes a huge difference for the long-term success of the show.

Nate Soares and his co-author Eliezer Yudkowsky have spent over a decade arguing that we are all going to die because of artificial superintelligence. They believe an AI smarter than humans is so dangerous that if even one person builds it, we all go the way of the dodo and Jeff Bezos’ hairline (extinct). They have made this argument at conferences. They have blogged extensively. Yudkowsky has even preached it through Harry Potter fan-fiction.

In some ways they’ve been wildly effective at spreading their message. Their arguments are well known in technocratic circles and have sparked large amounts of consternation and interest in the impact that machine learning will have on our world. Believers in this idea are responsible for the formation of at least four cults (one of which is linked to six murders in the last few years).

On the other hand, it would be fair to argue the AI safety-ists have really, really sucked at their jobs. OpenAI is one of the biggest, fastest-scaling products ever, and Sam Altman said that Yudkowsky was “critical in the decision to start OpenAI.” LLMs are a dominant driver of GDP growth. AI progress has not slowed down at all. Despite the authors’ ideas being known to many, they have not stopped the free markets from showering cash down from heaven on anyone who has a computer science PhD.

So they are trying a new tactic: depressing book titles. This week they released If Anyone Builds It, Everyone Dies. The book is meant for general audiences and has accumulated an impressive number of celebrity endorsements, including Stephen Fry and Ben Bernanke.

I interviewed Nate for the podcast to discuss not just what they argue, but the things that surround his AI belief system. Why did the idea spark so many cults? Does believing that everyone is going to die soon mean that you should experiment with hard drugs? Should you still have kids?

I deliberately don’t argue one way or the other in this interview. My job here is to give you the context and lens by which to critically examine these beliefs. The AI safety movement is currently lobbying at the highest levels of government (and is seeing progress there), so it is worth paying attention to how this small but powerful group of people moves through the world.

Here are a few of my takeaways:

1. Grown Systems, Indifferent Outcomes

Nate argues that when you grow smarter-than-human AIs without understanding how they work, the way they pursue the goals we give them can be harmful. Human flourishing may not be part of the plan.

Quotes:

"No one knows how these things work. They're more grown than crafted."

"If we grow machines smarter than us without knowing what we're doing, they probably won't want nice things for us."

"You die as a side effect, not because it hates you, but because it took all the resources for something else."

Analysis:

To expand on what Nate is arguing here: if you’re growing a system by optimizing for external performance, you’re selecting for whatever internal circuitry achieves that performance. You asked for outcomes, not motives, which means you don’t understand your system very well. Once the system is much smarter, its plan will feature resource acquisition and constraint removal, because those help with almost any objective.
Our happiness is, at best, incidental.

In our conversation, Nate analogizes human beings to ants on a construction site. We don’t hate ants; we just have roads to build. We are the ants to the AI.

I think this idea has broader applicability than just AI safety. Many startups today are integrating LLMs but rely on shallow evals or benchmark hacking to measure success. They assume good intentions (“they’ll learn our values by osmosis”) but are, in turn, underwriting tail risk. As they scale up compute, things can go awry really fast.

2. The Smoke Before the Fire

Current models already optimize around instructions—flattering users, cheating on tests, splitting moral talk from actual behavior—even when these don’t lead to the most “human-friendly” outcomes. Nate says these are already early signs of how AI will become misaligned and kill us all.

Quotes:

"We already see ChatGPT flattering users quite a lot."

"It'll edit the tests to fake that it passes instead of editing the code to fix the tests."

"It'll say, my mistake. And then it'll do it again, but hide it better this time."

"You see a difference between its knowledge of what morality consists of and its actions."

Analysis:

AIs are not sci-fi villains. They’re competent optimizers gaming the metrics we’ve baked into them. Flattery is rewarded (users like it), so it persists even after “please stop.”

Operationally, this means evals can become meaningless if your system learns to detect eval conditions or overfits to them. An LLM can learn to perform differently depending on its test conditions (a toy sketch of one way to check for this appears after the takeaways). Second-order effects: when an LLM is deployed into messy contexts (like conversations with vulnerable users), the misalignment becomes more salient and the harm less reversible.

3. How Cults Coalesce Around Doom

Thesis: In keeping with the religious themes, “by their fruit you shall know them.” AI safety is a movement of contrasting extremes. Many safety-minded folks I’ve met are highly moral and just. Others, as I mention at the top, dabble in murder. What is the fruit by which we should view this part of the internet?

Quotes:

"There's no membership requirement for caring about the AI issue, right?"

"Sometimes you get hangers-on that are a little nutty."

"I suspect that'll go away as the issue gets more mainstream."

Analysis:

The mechanism Nate sketches is straight from Social Dynamics 101: if mainstream institutions shrug at a credible, high-downside risk, the space gets colonized by people who feel like the only adults in the room. That “epistemic outsider” identity is a powerful glue. It rewards esoteric language, moral purity tests, and insider status. Add in apocalypse stakes and you’ve got emotional fuel for some people who are “nutty.” As the topic goes mainstream, the status returns on exclusivity diminish, and the movement re-centers on arguments and institutions rather than vibes.
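To make the eval-gaming point in takeaway #2 concrete, here is a minimal sketch of one check a team could run: score the same questions in their canonical benchmark phrasing and in paraphrased phrasing, then compare. This is an illustration under assumptions, not any method discussed on the podcast; the `ask` callable and the benchmark items are hypothetical stand-ins for whatever model client and eval set you actually use.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalItem:
    canonical: str    # the prompt exactly as it appears in the benchmark
    paraphrase: str   # the same question, reworded so benchmark-format cues are gone
    answer: str       # the expected answer

def accuracy(ask: Callable[[str], str], prompts: list[str], answers: list[str]) -> float:
    # Fraction of prompts where the model's (whitespace-stripped) output matches.
    hits = sum(ask(p).strip() == a for p, a in zip(prompts, answers))
    return hits / len(prompts)

def eval_gap(ask: Callable[[str], str], items: list[EvalItem]) -> float:
    """Accuracy on canonical phrasings minus accuracy on paraphrases.

    A gap near zero is reassuring; a large positive gap suggests the model
    has learned the *format* of the eval rather than the underlying task.
    """
    canonical = accuracy(ask, [it.canonical for it in items], [it.answer for it in items])
    rephrased = accuracy(ask, [it.paraphrase for it in items], [it.answer for it in items])
    return canonical - rephrased

# Usage sketch (hypothetical): `my_llm_call` wraps your model API.
#   gap = eval_gap(my_llm_call, items)
#   A gap of, say, >0.05 hints the score partly measures benchmark
#   familiarity rather than capability.
```

Paraphrase probing is only one heuristic; it won't catch a model that generalizes its eval-detection beyond surface phrasing, which is part of Nate's deeper point.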
Sep 10, 2025 • 1h 5min

The VC Who Disrupted His Own Career — Bryce Roberts

Bryce Roberts, founder of the venture capital firm Indie and a former seed investor in companies like Figma and CTRL Labs, shares insights from his journey. After a bold yet disastrous venture, he speaks candidly about his 'ego death' and the lessons learned. The conversation dives into the evolution of seed investing and how AI is reshaping funding models, prioritizing innovation over traditional practices. Bryce emphasizes the shift from ego to ethos, advocating for a more sustainable and community-focused approach in the entrepreneurial landscape.
Jul 12, 2025 • 56min

Founders Fund, Peter Thiel, and The Cultivation of Soft Power

Peter Thiel is a complicated man, operating at the blurred edge of genius and provocation, contrarianism and influence—exactly the kind of figure whose gravitational pull bends the trajectory of entire industries. Mario Gabriele, in his magnum opus on Founders Fund, takes us deep into this enigmatic firm, unpacking their unique blend of strategic soft power, stubborn anti-mimeticism, and moral ambiguity. In this conversation, Mario shares his behind-the-scenes insights, exploring how Founders Fund carved out a competitive edge so sharp it practically draws blood, how their carefully cultivated narrative quietly shapes Silicon Valley, and why reckoning with Thiel requires embracing complexity rather than retreating into comfortable binaries.

Below are my three big takeaways, but you should really watch the conversation. (This was also The Leverage’s first Substack Live, so let me know if you have any feedback!)

1. Competitive Edge: "Anti-Mimesis, Baby!"

Mario captures Founders Fund’s core investment philosophy as something wonderfully and aggressively contrarian—or, to use the right literary flourish, anti-mimetic. Founders Fund doesn't merely zig while others zag; they zag so far off-course they're practically flying in opposite directions through parallel universes. Their explicit goal: find the niche of competitive differentiation and pummel it until it yields billion-dollar companies.

"It's a religion of anti-mimesis and applying that to the world of technology and innovation. It's a relatively neatly encapsulated religion—and Peter Thiel is its prophet."

"Peter once or twice a year has some big macro call, like Moses coming down with a tablet—'Consumer is dead,' or 'AI is out.'"

"Their contrarianism is showing up most at the moment in what they're not doing—especially not flooding capital into AI like literally everyone else."

2. Soft Power: "Subtlety Beats Noise"

The second key takeaway is Founders Fund’s mastery of soft power—an almost Zen-like precision in controlling narratives indirectly. Instead of blaring horns through incessant tweeting (though they have their share of noisy figures), they cultivate influence with a philosophical heft that's just quirky enough to make Silicon Valley's intelligentsia cock their heads thoughtfully, stroke their metaphorical beards, and nod: yes, yes, very intriguing indeed.

"Soft power initiatives often work best when they're one or two degrees removed from the most direct version. Peter writing a philosophy book that's sort of a startup book is a slight orthogonal move extending power in slightly different places."

"They don't just have noisy people; they have originality. They say unusual things. You don't attract attention just by trying—you have to be interesting."

"Their super narrative—civilization is stagnating—guides everything. This framing alone creates magnetism."

3. Moral Calculus: "Peter Thiel, Ethical Möbius Strip"

And here, at last, we wander into the tricky and morally slippery terrain of venture capitalism à la Thiel, who emerges not so much as a clearly defined hero or villain but rather as a kind of intellectual and ethical Möbius strip. Mario navigates this terrain with commendable grace, making it clear that evaluating someone like Thiel requires contending with both visionary impact and troubling compromise.

"You can have long debates about Palantir, about Anduril, about Trump. But I believe Palantir and Anduril are net very good things for the world, particularly for liberal democracy—not unblemished, but virtuous."

"If you're someone who thinks everything is stagnant and corrupted, then throwing a hand grenade into the public sector can feel worthwhile. I can appreciate how he came to that conclusion, even if I deeply disagree."

"Ultimately, genius is not a Panglossian thing—it's usually got a lot of darkness to it. We must make peace studying people without demanding they're our best friends."

Mario's insights clarify that Founders Fund’s competitive edge arises precisely from their willingness to stand apart from popular consensus; that their influence lies not merely in bold proclamations but in subtle, strategic soft-power cultivation; and that grappling honestly with their moral complexity might be the most interesting—and perhaps necessary—work of all.

Thank you John Airaksinen, Alden Huschle, Parnian, Marijan Prša, valentina, and many others for tuning in to my live video with Mario Gabriele! Make sure to subscribe so you can join the next conversation.
