London Futurists

Dec 23, 2025 • 35min

The puzzle pieces that can defuse the US-China AI race dynamic, with Kayla Blomquist

Almost every serious discussion about options to constrain the development of advanced AI results in someone raising the question: “But what about China?” The worry behind this question is that slowing down AI research and development in the US and Europe will allow China to race ahead.

It's true: the relationship between China and the rest of the world has many complications. That’s why we’re delighted that our guest in this episode is Kayla Blomquist, the Co-founder and Director of the Oxford China Policy Lab, or OCPL for short. OCPL describes itself as a global community of China and emerging technology researchers at Oxford, who produce policy-relevant research to navigate risks in the US-China relationship and beyond.

In parallel with her role at OCPL, Kayla is pursuing a DPhil at the Oxford Internet Institute. She is a recent fellow at the Centre for Governance of AI, and the lead researcher and contributing author to the Oxford China Briefing Book. She holds an MSc from the Oxford Internet Institute and a BA with Honours in International Relations, Public Policy, and Mandarin Chinese from the University of Denver. She also studied at Peking University and is professionally fluent in Mandarin.

Kayla previously worked as a diplomat in the U.S.
Mission to China, where she specialized in the governance of emerging technologies, human rights, and improving the use of new technology within government services.

Selected follow-ups:
- Kayla Blomquist - Personal site
- Oxford China Policy Lab
- The Oxford Internet Institute (OII)
- Google AI defeats human Go champion (Ke Jie)
- AI Safety Summit 2023 (Bletchley Park, UK)
- United Kingdom: Balancing Safety, Security, and Growth - OCPL
- China wants to lead the world on AI regulation - report from APEC 2025
- China's WAICO proposal and the reordering of global AI governance
- Impact of AI on cyber threat from now to 2027
- Options for the future of the global governance of AI - London Futurists Webinar
- A Tentative Draft of a Treaty - Online appendix to the book If Anyone Builds It, Everyone Dies
- An International Agreement to Prevent the Premature Creation of Artificial Superintelligence

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

C-Suite Perspectives
Elevate how you lead with insight from today’s most influential executives.
Listen on: Apple Podcasts | Spotify
Dec 16, 2025 • 45min

Jensen Huang and the zero billion dollar market, with Stephen Witt

Our guest in this episode is Stephen Witt, an American journalist and author who writes about the people driving the technological revolutions. He is a regular contributor to The New Yorker, and is famous for deep-dive investigations.

Stephen's new book is "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip", which has just won the 2025 Financial Times and Schroders Business Book of the Year Award. It is a definitive account of the rise of Nvidia, from its foundation in a Denny's restaurant in 1993 as a video game component manufacturer, to becoming the world's most valuable company, and the hardware provider for the current AI boom.

Stephen's previous book, “How Music Got Free”, is a history of music piracy and the MP3, and was also a finalist for the FT Business Book of the Year.

Selected follow-ups:
- Stephen Witt - personal site
- Articles by Stephen Witt on The New Yorker
- The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip - book site
- Stephen Witt wins FT and Schroders Business Book of the Year - Financial Times
- Nvidia Executives
- Battle Royale (Japanese film) - IMDb
- The Economic Singularity - book by Calum Chace
- A Cubic Millimeter of a Human Brain Has Been Mapped in Spectacular Detail - Nature
- NotebookLM - by Google
Dec 5, 2025 • 44min

What's your p(Pause)? with Holly Elmore

Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: “Our proposal is simple: Don’t build powerful AI systems until we know how to keep them safe. Pause AI.”

But PauseAI isn’t just a talking shop. They’re probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, Sao Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities.

Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard’s Organismic and Evolutionary Biology department.

Selected follow-ups:
- Holly Elmore - Substack
- PauseAI US
- PauseAI - global site
- Wild Animal Suffering... and why it matters
- Hard problem of consciousness - Wikipedia
- The Unproven (And Unprovable) Case For Net Wild Animal Suffering: A Reply To Tomasik - by Michael Plant
- Leading Evolution Compassionately - Herbivorize Predators
- David Pearce (philosopher) - Wikipedia
- The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
- Nick Bostrom's new views regarding AI/AI safety - Reddit
- AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
- On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
- California Institute of Machine Consciousness - organisation founded by Joscha Bach
- Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
- Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore
Oct 31, 2025 • 40min

Real-life superheroes and troubled institutions, with Tom Ough

Popular movies sometimes feature leagues of superheroes who are ready to defend the Earth against catastrophe. In this episode, we’re going to be discussing some real-life superheroes, as chronicled in the new book by our guest, Tom Ough. The book is entitled “The Anti-Catastrophe League: The Pioneers And Visionaries On A Quest To Save The World”. Some of these heroes are already reasonably well known, but others were new to David, and, he suspects, to many of the book’s readers.

Tom is a London-based journalist. Earlier in his career he worked in newspapers, mostly for the Telegraph, where he was a staff feature-writer and commissioning editor. He is currently a senior editor at UnHerd, where he commissions essays and occasionally writes them. Perhaps one reason why he writes so well is that he has a BA in English Language and Literature from Oxford University, where he was a Casberd scholar.

Selected follow-ups:
- About Tom Ough
- The Anti-Catastrophe League - the book's webpage
- On novel methods of pandemic prevention
- What is effective altruism? (EA)
- Sam Bankman-Fried - Wikipedia (also covers FTX)
- Open Philanthropy
- Conscium
- Here Comes the Sun - book by Bill McKibben
- The 10 Best Beatles Songs (Based on Streams)
- Carrington Event - Wikipedia
- Mirror life - Wikipedia
- Future of Humanity Institute 2005-2024: final report - by Anders Sandberg
- Oxford FHI Global Catastrophic Risks - FHI conference, 2008
- Forethought
- Review of Nick Bostrom’s Deep Utopia - by Calum
- DeepMind and OpenAI claim gold in International Mathematical Olympiad
- What the Heck is Hubble Tension?
- The Decade Ahead - by Leopold Aschenbrenner
- AI 2027
- Anglofuturism
Oct 10, 2025 • 42min

Safe superintelligence via a community of AIs and humans, with Craig Kaplan

Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon.

Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

Selected follow-ups:
- iQ Company
- Superintelligence - by iQ Company
- Herbert A. Simon - Wikipedia
- Amara’s Law and Its Place in the Future of Tech - Pohan Lin
- The Society of Mind - book by Marvin Minsky
- AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
- Statement on AI Risk - Center for AI Safety
- I’ve Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
- Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
- London Futurists Podcast episode featuring David Brin
- Reason in Human Affairs - book by Herbert Simon
- US and China will intervene to halt ‘suicide race’ of AGI - Max Tegmark
- If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
- AGI-25 - conference in Reykjavik
- The First Global Brain Workshop - Brussels 2001
- Center for Integrated Cognition
- Paul S. Rosenbloom
- Tatiana Shavrina, Meta
- Henry Minsky launches AI startup inspired by father’s MIT research
Sep 17, 2025 • 38min

How progress ends: the fate of nations, with Carl Benedikt Frey

Many people expect improvements in technology over the next few years, but fewer people are optimistic about improvements in the economy. Especially in Europe, there’s a narrative that productivity has stalled, that the welfare state is over-stretched, and that the regions of the world where innovation will be rewarded are the US and China – although there are lots of disagreements about which of these two countries will gain the upper hand.

To discuss these topics, our guest in this episode is Carl Benedikt Frey, the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute. Carl is also a Fellow at Mansfield College, University of Oxford, and is Director of the Future of Work Programme and Oxford Martin Citi Fellow at the Oxford Martin School.

Carl’s new book has the ominous title, “How Progress Ends”. The subtitle is “Technology, Innovation, and the Fate of Nations”. A central premise of the book is that our ability to think clearly about the possibilities for progress and stagnation today is enhanced by looking backward at the rise and fall of nations around the globe over the past thousand years. The book contains fascinating analyses of how countries at various times made significant progress, and at other times stagnated. The book also considers what we might deduce about the possible futures of different economies worldwide.

Selected follow-ups:
- Professor Carl-Benedikt Frey - Oxford Martin School
- How Progress Ends: Technology, Innovation, and the Fate of Nations - Princeton University Press
- Stop Acting Like This Is Normal - Ezra Klein ("Stop Funding Trump’s Takeover")
- OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
- A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick - Vice
- The future of employment: How susceptible are jobs to computerisation? - Carl Benedikt Frey and Michael A. Osborne
- Europe's Choice: Policies for Growth and Resilience - Alfred Kammer, IMF
- MIT Radiation Laboratory ("Rad Lab")
Sep 8, 2025 • 37min

Tsetlin Machines, Literal Labs, and the future of AI, with Noel Hurley

Our guest in this episode is Noel Hurley. Noel is a highly experienced technology strategist with a long career at the cutting edge of computing. He spent two decade-long stints at Arm, the semiconductor company whose processor designs power hundreds of billions of devices worldwide. Today, he’s a co-founder of Literal Labs, where he’s developing Tsetlin Machines. Named after Michael Tsetlin, a Soviet mathematician, these are a kind of machine learning model that is energy-efficient, flexible, and surprisingly effective at solving complex problems - without the opacity or computational overhead of large neural networks.

AI has long had two main camps, or tribes. One camp works with neural networks, including Large Language Models. Neural networks are brilliant at pattern matching, and can be compared to human instinct, or fast thinking, to use Daniel Kahneman's terminology. Neural nets have been dominant since the first Big Bang in AI in 2012, when Geoff Hinton and others demonstrated the foundations for deep learning.

For decades before the 2012 Big Bang, the predominant form of AI was symbolic AI, also known as Good Old Fashioned AI. This can be compared to logical reasoning, or slow thinking, in Kahneman's terminology.

Tsetlin Machines have characteristics of both neural networks and symbolic AI. They are rule-based learning systems built from simple automata, not from neurons or weights. But their learning mechanism is statistical and adaptive, more like machine learning than traditional symbolic AI.
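To give a flavour of the building block involved - this is an illustrative sketch of a single two-action Tsetlin automaton, not Literal Labs code - the automaton walks between numbered memory states in response to rewards and penalties, and whichever half of the state space it occupies determines its action:

```python
import random

class TsetlinAutomaton:
    """A minimal two-action Tsetlin automaton with 2n memory states.

    States 1..n select action 0; states n+1..2n select action 1.
    Rewards push the state deeper into the current action's half;
    penalties push it toward the boundary, eventually flipping the action.
    """

    def __init__(self, n_states_per_action=3):
        self.n = n_states_per_action
        # Start in one of the two states nearest the decision boundary
        self.state = random.choice([self.n, self.n + 1])

    @property
    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Strengthen the current action: move away from the boundary (clamped)
        if self.action == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action: move toward (and possibly across) the boundary
        self.state += 1 if self.action == 0 else -1

# A toy environment in which action 1 is the right choice 90% of the time
random.seed(0)
automaton = TsetlinAutomaton()
for _ in range(200):
    correct = automaton.action == 1
    if random.random() < 0.9:
        automaton.reward() if correct else automaton.penalize()
    else:
        automaton.penalize() if correct else automaton.reward()
print("learned action:", automaton.action)
```

A full Tsetlin Machine coordinates large teams of such automata, each voting on whether a literal should be included in a human-readable clause - which is the sense in which it is rule-based yet statistically trained.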
Selected follow-ups:
- Noel Hurley - Literal Labs
- A New Generation of Artificial Intelligence - Literal Labs
- Michael Tsetlin - Wikipedia
- Thinking, Fast and Slow - book by Daniel Kahneman
- 54x faster, 52x less energy - MLPerf Inference metrics
- Introducing the Model Context Protocol (MCP) - Anthropic
- Pioneering Safe, Efficient AI - Conscium
- Smartphones and Beyond - a personal history of Psion and Symbian
- The Official History of Arm - Arm
- Interview with Sir Robin Saxby - IT Archive
- How Spotify came to be worth billions - BBC
Aug 5, 2025 • 41min

Intellectual dark matter? A reputation trap? The case of cold fusion, with Jonah Messinger

Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead end, and that the only scientists who continue to study it are cranks. However, as we’ll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.

Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master’s in Energy and Bachelor’s in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

Selected follow-ups:
- Jonah Messinger (The Breakthrough Institute)
- nucleonics.org
- U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions (ARPA-E)
- How Anomalous Science Breaks Through - by Jonah Messinger
- Wolfgang Pauli (Wikiquote)
- Cold fusion: A case study for scientific behavior (Understanding Science)
- Calculated fusion rates in isotopic hydrogen molecules - by SE Koonin & M Nauenberg
- Known mechanisms that increase nuclear fusion rates in the solid state - by Florian Metzler et al
- Introduction to superradiance (Cold Fusion Blog)
- Peter L. Hagelstein - Professor at MIT
- Models for nuclear fusion in the solid state - by Peter Hagelstein et al
- Risk and Scientific Reputation: Lessons from Cold Fusion - by Huw Price
- Katalin Karikó (Wikipedia)
- “Abundance” and Its Insights for Policymakers - by Hadley Brown
- Identifying intellectual dark matter - by Florian Metzler and Jonah
Jul 29, 2025 • 54min

AI agents, AI safety, and AI boycotts, with Peter Scott

This episode of London Futurists Podcast is a special joint production with the AI and You podcast, which is hosted by Peter Scott. It features a three-way discussion between Peter, Calum, and David on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.

Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he went to California to work for NASA’s Jet Propulsion Laboratory. His weekly podcast, “Artificial Intelligence and You”, tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution?

Peter’s second book, also called “Artificial Intelligence and You”, was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

Selected follow-ups:
- Artificial Intelligence and You (podcast)
- Making Sense of AI - Peter's personal website
- Artificial Intelligence and You (book)
- AI agent verification - Conscium
- Preventing Zero-Click AI Threats: Insights from EchoLeak - TrendMicro
- Future Crimes - book by Marc Goodman
- How TikTok Serves Up Sex and Drug Videos to Minors - Washington Post
- COVID-19 vaccine misinformation and hesitancy - Wikipedia
- Cambridge Analytica - Wikipedia
- Invisible Rulers - book by Renée DiResta
- 2025 Northern Ireland riots (Ballymena) - Wikipedia
- Google DeepMind Slammed by Protesters Over Broken AI Safety Promise
Jul 18, 2025 • 44min

The remarkable potential of hydrogen cars, with Hugo Spowers

The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck and hips when he misjudged the length of one of his bungee ropes.

Hugo is a petrol head, and has done more than his fair share of car racing. But if he’ll excuse the pun, his driving passion was always the environment, and he is one of the world’s most persistent and dedicated pioneers of hydrogen cars.

He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup which has developed five generations of research vehicles. Hydrogen cars are powered by electric motors using electricity generated by fuel cells. Fuel cells are electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water.

There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate. Riversimple’s cars carry modest-sized fuel cells complemented by supercapacitors, with motors for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model, involving higher initial investment per car, and no built-in obsolescence. The initial, market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually.
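The "electrolysis in reverse" picture corresponds to the textbook reactions of a proton-exchange-membrane fuel cell (a general chemistry summary, not a Riversimple-specific detail):

```latex
\begin{aligned}
\text{Anode:}   &\quad 2\,\mathrm{H_2} \;\rightarrow\; 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:} &\quad \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2O} \\
\text{Overall:} &\quad 2\,\mathrm{H_2} + \mathrm{O_2} \;\rightarrow\; 2\,\mathrm{H_2O} + \text{electrical energy} + \text{heat}
\end{aligned}
```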
And Hugo also has strong opinions about company governance.

Selected follow-ups:
- Hugo Spowers - Wikipedia
- Riversimple
