London Futurists

Aug 5, 2025 • 40min

Intellectual dark matter? A reputation trap? The case of cold fusion, with Jonah Messinger

Could the future see the emergence and adoption of a new field of engineering called nucleonics, in which the energy of nuclear fusion is accessed at relatively low temperatures, producing abundant, clean, safe energy? This kind of idea has been discussed since 1989, when the claims of cold fusion first received media attention. It is often assumed that the field quickly reached a dead end, and that the only scientists who continue to study it are cranks. However, as we’ll hear in this episode, there may be good reasons to keep an open mind about a number of anomalous but promising results.

Our guest is Jonah Messinger, who is a Winton Scholar and Ph.D. student at the Cavendish Laboratory of Physics at the University of Cambridge. Jonah is also a Research Affiliate at MIT, a Senior Energy Analyst at the Breakthrough Institute, and previously he was a Visiting Scientist and ThinkSwiss Scholar at ETH Zürich. His work has appeared in research journals, on the John Oliver show, and in publications of Columbia University. He earned his Master’s in Energy and Bachelor’s in Physics from the University of Illinois at Urbana-Champaign, where he was named to its Senior 100 Honorary.

Selected follow-ups:
Jonah Messinger (The Breakthrough Institute)
nucleonics.org
U.S. Department of Energy Announces $10 Million in Funding to Projects Studying Low-Energy Nuclear Reactions (ARPA-E)
How Anomalous Science Breaks Through - by Jonah Messinger
Wolfgang Pauli (Wikiquote)
Cold fusion: A case study for scientific behavior (Understanding Science)
Calculated fusion rates in isotopic hydrogen molecules - by SE Koonin & M Nauenberg
Known mechanisms that increase nuclear fusion rates in the solid state - by Florian Metzler et al
Introduction to superradiance (Cold Fusion Blog)
Peter L. Hagelstein - Professor at MIT
Models for nuclear fusion in the solid state - by Peter Hagelstein et al
Risk and Scientific Reputation: Lessons from Cold Fusion - by Huw Price
Katalin Karikó (Wikipedia)
“Abundance” and Its Insights for Policymakers - by Hadley Brown
Identifying intellectual dark matter - by Florian Metzler and Jonah Messinger

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jul 29, 2025 • 54min

AI agents, AI safety, and AI boycotts, with Peter Scott

This episode of the London Futurists Podcast is a special joint production with the Artificial Intelligence and You podcast, which is hosted by Peter Scott. It features a three-way discussion, between Peter, Calum, and David, on the future of AI, with particular focus on AI agents, AI safety, and AI boycotts.

Peter Scott is a futurist, speaker, and technology expert helping people master technological disruption. After receiving a Master’s degree in Computer Science from Cambridge University, he went to California to work for NASA’s Jet Propulsion Laboratory. His weekly podcast, “Artificial Intelligence and You”, tackles three questions: What is AI? Why will it affect you? How do you and your business survive and thrive through the AI Revolution?

Peter’s second book, also called “Artificial Intelligence and You”, was released in 2022. Peter works with schools to help them pivot their governance frameworks, curricula, and teaching methods to adapt to and leverage AI.

Selected follow-ups:
Artificial Intelligence and You (podcast)
Making Sense of AI - Peter's personal website
Artificial Intelligence and You (book)
AI agent verification - Conscium
Preventing Zero-Click AI Threats: Insights from EchoLeak - TrendMicro
Future Crimes - book by Marc Goodman
How TikTok Serves Up Sex and Drug Videos to Minors - Washington Post
COVID-19 vaccine misinformation and hesitancy - Wikipedia
Cambridge Analytica - Wikipedia
Invisible Rulers - book by Renée DiResta
2025 Northern Ireland riots (Ballymena) - Wikipedia
Google DeepMind Slammed by Protesters Over Broken AI Safety Promise

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jul 18, 2025 • 43min

The remarkable potential of hydrogen cars, with Hugo Spowers

The guest in this episode is Hugo Spowers. Hugo has led an adventurous life. In the 1970s and 80s he was an active member of the Dangerous Sports Club, which invented bungee jumping, inspired by an initiation ceremony in Vanuatu. Hugo skied down a black run in St. Moritz in formal dress, seated at a grand piano, and he broke his back, neck and hips when he misjudged the length of one of his bungee ropes.

Hugo is a petrol head, and has done more than his fair share of car racing. But if he’ll excuse the pun, his driving passion was always the environment, and he is one of the world’s most persistent and dedicated pioneers of hydrogen cars.

He is co-founder and CEO of Riversimple, a 24-year-old pre-revenue startup which has developed five generations of research vehicles. Hydrogen cars are powered by electric motors using electricity generated by fuel cells. Fuel cells are electrolysis in reverse: you put in hydrogen and oxygen, and what you get out is electricity and water.

There is a long-standing debate among energy experts about the role of hydrogen fuel cells in the energy mix, and Hugo is a persuasive advocate. Riversimple’s cars carry modest-sized fuel cells complemented by supercapacitors, with motors for each of the four wheels. The cars are made of composites, not steel, because minimising weight is critical for fuel efficiency, pollution, and road safety. The cars are leased rather than sold, which enables a circular business model, involving higher initial investment per car, and no built-in obsolescence. The initial, market-entry cars are designed as local run-arounds for households with two cars, which means the fuelling network can be built out gradually. And Hugo also has strong opinions about company governance.

Selected follow-ups:
Hugo Spowers - Wikipedia
Riversimple

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jun 23, 2025 • 39min

AI and the end of conflict, with Simon Horton

Can we use AI to improve how we handle conflict? Or even to end the worst conflicts that are happening all around us? That’s the subject of the new book by our guest in this episode, Simon Horton. The book has the bold title “The End of Conflict: How AI will end war and help us get on better”.

Simon has a rich background, including being a stand-up comedian and a trapeze artist – which are, perhaps, two useful skills for dealing with acute conflict. He has taught negotiation and conflict resolution for 20 years, across 25 different countries, where his clients have included the British Army, the Saudi Space Agency, and Goldman Sachs. His previous books include “Change their minds” and “The leader’s guide to negotiation”.

Selected follow-ups:
Simon Horton
The End of Conflict - book website
The Better Angels of our Nature - book by Steven Pinker
Crime in England and Wales: year ending March 2024 - UK Office for National Statistics
How Martin McGuinness and Ian Paisley forged an unlikely friendship - Belfast Telegraph
Review of Steven Pinker’s Enlightenment Now by Scott Aaronson
A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” by Phil Torres
End Times: Elites, Counter-Elites, and the Path of Political Disintegration - book by Peter Turchin
Why do chimps kill each other? - Science
Using Artificial Intelligence in Peacemaking: The Libya Experience - Colin Irwin, University of Liverpool
Retrospective on the Oslo Accord - New York Times
Remesh
Polis - Democracy Technologies
Waves: Tech-Powered Democracy - Demos

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jun 11, 2025 • 49min

The AI disconnect: understanding vs motivation, with Nate Soares

Nate Soares, Executive Director of MIRI and a prominent voice in AI safety, shares his insights into the complexities of artificial intelligence. He discusses the risks surrounding AI alignment and the unsettling behavior observed in advanced models like OpenAI's o1. Soares emphasizes the disconnect between AI motivations and human values, addressing the ethical dilemmas in developing superintelligent systems. He urges a proactive approach to managing potential threats, highlighting the need for global awareness and responsible advancements in AI technology.
May 28, 2025 • 41min

Anticipating an Einstein moment in the understanding of consciousness, with Henry Shevlin

Our guest in this episode is Henry Shevlin. Henry is the Associate Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, where he also co-directs the Kinds of Intelligence program and oversees educational initiatives. He researches the potential for machines to possess consciousness, the ethical ramifications of such developments, and the broader implications for our understanding of intelligence.

In his 2024 paper, “Consciousness, Machines, and Moral Status,” Henry examines the recent rapid advancements in machine learning and the questions they raise about machine consciousness and moral status. He suggests that public attitudes towards artificial consciousness may change swiftly, as human-AI interactions become increasingly complex and intimate. He also warns that our tendency to anthropomorphise may lead to misplaced trust in and emotional attachment to AIs.

Note: this episode is co-hosted by David and Will Millership, the CEO of a non-profit called PRISM (Partnership for Research Into Sentient Machines). PRISM is seeded by Conscium, a startup where both Calum and David are involved, and which, among other things, is researching the possibility and implications of machine consciousness. Will and Calum will be releasing a new PRISM podcast focusing entirely on Conscious AI, and the first few episodes will be in collaboration with the London Futurists Podcast.

Selected follow-ups:
PRISM podcast
Henry Shevlin - personal site
Kinds of Intelligence - Leverhulme Centre for the Future of Intelligence
Consciousness, Machines, and Moral Status - 2024 paper by Henry Shevlin
Apply rich psychological terms in AI with care - by Henry Shevlin and Marta Halina
What insects can tell us about the origins of consciousness - by Andrew Barron and Colin Klein
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - by Patrick Butlin, Robert Long, et al
Association for the Study of Consciousness

Other researchers mentioned:
Blake Lemoine
Thomas Nagel
Ned Block
Peter Senge
Galen Strawson
David Chalmers
David Benatar
Thomas Metzinger
Brian Tomasik
Murray Shanahan

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
May 9, 2025 • 37min

The case for a conditional AI safety treaty, with Otto Barten

How can a binding international treaty be agreed and put into practice, when many parties are strongly tempted to break the rules of the agreement, for commercial or military advantage, and when cheating may be hard to detect? That’s the dilemma we’ll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto, advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint which we’ll be discussing today, “International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty”.

Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:
Existential Risk Observatory
There Is a Solution to AI’s Existential Risk Problem - Time
International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty - Otto Barten and colleagues
The Precipice: Existential Risk and the Future of Humanity - book by Toby Ord
Grand futures and existential risk - lecture by Anders Sandberg in London attended by Otto
PauseAI
StopAI
Responsible Scaling Policies - METR
Meta warns of 'worse' experience for European users - BBC News
Accidental Nuclear War: a Timeline of Close Calls - FLI
The Vulnerable World Hypothesis - Nick Bostrom
Semiconductor Manufacturing Optics - Zeiss
California Institute for Machine Consciousness
Tipping point for large-scale social change? Just 25 percent - Penn Today

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Apr 30, 2025 • 48min

Humanity's final four years? with James Norris

In this episode, we return to the subject of existential risks, but with a focus on what actions can be taken to eliminate or reduce these risks.

Our guest is James Norris, who describes himself on his website as an existential safety advocate. The website lists four primary organizations which he leads: the International AI Governance Alliance, Upgradable, the Center for Existential Safety, and Survival Sanctuaries.

Previously, one of James' many successful initiatives was Effective Altruism Global, the international conference series for effective altruists. He also spent some time as the organizer of a kind of sibling organization to London Futurists, namely Bay Area Futurists. He graduated from the University of Texas at Austin with a triple major in psychology, sociology, and philosophy, as well as with minors in too many subjects to mention.

Selected follow-ups:
James Norris website
Upgrade your life & legacy - Upgradable
The 7 Habits of Highly Effective People (Stephen Covey)
Beneficial AI 2017 - Asilomar conference
"...superintelligence in a few thousand days" - Sam Altman blogpost
Amara's Law - DevIQ
The Probability of Nuclear War (JFK estimate)
AI Designs Chemical Weapons - The Batch
The Vulnerable World Hypothesis - Nick Bostrom
We Need To Build Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
Instrumental convergence - Wikipedia
Neanderthal extinction - Wikipedia
Matrioshka brain - Wikipedia
Will there be a 'WW3' before 2050? - Manifold prediction market
Existential Safety Action Pledge
An Urgent Call for Global AI Governance - IAIGA petition
Build your survival sanctuary

Other people mentioned include: Eliezer Yudkowsky, Roman Yampolskiy, Yann LeCun, Andrew Ng

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Apr 23, 2025 • 42min

Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh

Our subject in this episode may seem grim – it’s the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time.

These scenarios aren’t pleasant to contemplate, but there’s a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last couple of decades, few people have been thinking about the unthinkable more carefully and systematically than our guest today, Sean ÓhÉigeartaigh. Sean is the author of a recent summary article from Cambridge University Press that we’ll be discussing, “Extinction of the human species: What could cause it and how likely is it to occur?”

Sean is presently based in Cambridge where he is a Programme Director at the Leverhulme Centre for the Future of Intelligence. Previously he was founding Executive Director of the Centre for the Study of Existential Risk, and before that, he managed research activities at the Future of Humanity Institute in Oxford.

Selected follow-ups:
Seán Ó hÉigeartaigh - Leverhulme Centre profile
Extinction of the human species - by Sean ÓhÉigeartaigh
Herman Kahn - Wikipedia
Moral.me - by Conscium
Classifying global catastrophic risks - by Shahar Avin et al
Defence in Depth Against Human Extinction - by Anders Sandberg et al
The Precipice - book by Toby Ord
Measuring AI Ability to Complete Long Tasks - by METR
Cold Takes - blog by Holden Karnofsky
What Comes After the Paris AI Summit? - article by Sean
ARC-AGI - by François Chollet
Henry Shevlin - Leverhulme Centre profile
Eleos (includes Rosie Campbell and Robert Long)
NeurIPS talk by David Chalmers
Trustworthy AI Systems To Monitor Other AI: Yoshua Bengio
The Unilateralist’s Curse - by Nick Bostrom and Anders Sandberg

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Mar 26, 2025 • 45min

The best of times and the worst of times, updated, with Ramez Naam

Ramez Naam, a climate tech investor and award-winning author, shares insights on today's dual realities of prosperity and peril. He discusses significant advancements in clean energy technologies and the urgent climate challenges that remain. Ramez emphasizes the role of governance in navigating these issues. The conversation also covers innovative solutions like geoengineering and the complex landscape of AI's societal impact, as well as the intertwined fate of democracy and technology amid growing fears and inequalities.
