
London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
Latest episodes

Jan 11, 2023 • 33min
Assessing Quantum Computing, with Ignacio Cirac
Quantum computing is a tough subject to explain and discuss. As Niels Bohr put it, "Anyone who is not shocked by quantum theory has not understood it". Richard Feynman helpfully added, "I think I can safely say that nobody understands quantum mechanics".

Quantum computing employs the weird properties of quantum mechanics like superposition and entanglement. Classical computing uses binary digits, or bits, which are either on or off. Quantum computing uses qubits, which can be both on and off at the same time, and this characteristic somehow makes them enormously more computationally powerful.

Co-hosts Calum and David knew that to address this important but difficult subject, we needed an absolute expert, who was capable of explaining it in lay terms. When Calum heard Dr Ignacio Cirac give a talk on the subject in Madrid last month, he knew we had found our man.

Ignacio is director of the Max Planck Institute of Quantum Optics in Germany, and holds honorary and visiting professorships pretty much everywhere that serious work is done on quantum physics. He has done seminal work on the trapped ion approach to quantum computing and several other aspects of the field, and has published almost 500 papers in prestigious journals. He is spoken of as a possible Nobel Prize winner.

Topics discussed in this conversation include:
*) A brief history of quantum computing (QC) from the 1990s to the present
*) The kinds of computation where QC can out-perform classical computers
*) Likely timescales for further progress in the field
*) Potential quantum analogies of Moore's Law
*) Physical qubits contrasted with logical qubits
*) Reasons why errors often arise with qubits - and approaches to reducing these errors
*) Different approaches to the hardware platforms of QC - and which are most likely to prove successful
*) Ways in which academia can compete with (and complement) large technology companies
*) The significance of "quantum supremacy" or "quantum advantage": what has been achieved already, and what might be achieved in the future
*) The risks of a forthcoming "quantum computing winter", similar to the AI winters in which funding was reduced
*) Other comparisons and connections between AI and QC
*) The case for keeping an open mind, and for supporting diverse approaches, regarding QC platforms
*) Assessing the threats posed by Shor's algorithm and fault-tolerant QC
*) Why companies should already be considering changing the encryption systems that are intended to keep their data secure
*) Advice on how companies can build and manage in-house "quantum teams"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://en.wikipedia.org/wiki/Juan_Ignacio_Cirac_Sasturain
https://en.wikipedia.org/wiki/Rydberg_atom
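As a rough illustration of the "both on and off at the same time" idea described above - our own sketch, not material from the episode - the following Python snippet simulates a single qubit placed into superposition by a Hadamard gate and then measured.

```python
# Minimal sketch (illustration only): one qubit simulated with NumPy.
# A qubit's state is a pair of complex amplitudes; measuring it yields
# 0 with probability |alpha|^2 and 1 with probability |beta|^2.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the "off" basis state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                        # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                     # Born rule: [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(probs, np.bincount(samples))             # roughly 500 zeros and 500 ones
```

Simulating n qubits this way requires tracking 2^n amplitudes, which gives one intuition for why genuine quantum hardware, rather than classical simulation, is the interesting prospect.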

Jan 4, 2023 • 37min
Questioning the Fermi Paradox, with Anders Sandberg
In the summer of 1950, the physicist Enrico Fermi and some colleagues at the Los Alamos Lab in New Mexico were walking to lunch, and casually discussing flying saucers, when Fermi blurted out "But where is everybody?" He was not the first to pose the question, and the precise phrasing is disputed, but the mystery he was referring to remains compelling.

We appear to live in a vast universe, with billions of galaxies, each with billions of stars, mostly surrounded by planets, including many like the Earth. The universe appears to be 13.7 billion years old, and even if intelligent life requires an Earth-like planet, and even if it can only travel and communicate at the speed of light, we ought to see lots of evidence of intelligent life. But we don't. No beams of light from stars occluded by artificial satellites spelling out pi. No signs of galactic-scale engineering. No clear evidence of little green men demanding to meet our leaders.

Numerous explanations have been advanced to explain this discrepancy, and one man who has spent more brainpower than most exploring them is the always-fascinating Anders Sandberg. Anders is a computational neuroscientist who got waylaid by philosophy, which he pursues at Oxford University, where he is a senior research fellow.

Topics in this episode include:
* The Drake equation for estimating the number of active, communicative extraterrestrial civilizations in our galaxy
* Changes in recent decades in estimates of some of the factors in the Drake equation
* The amount of time it would take self-replicating space probes to spread across the galaxy
* The Dark Forest hypothesis - that all extraterrestrial civilizations are deliberately quiet, out of fear
* The likelihood of extraterrestrial civilizations emitting observable signs of their existence, even if they try to suppress them
* The implausibility of all extraterrestrial civilizations converging to the same set of practices, rather than at least some acting in ways where we would notice their existence - and a counter argument
* The possibility of civilisations opting to spend all their time inside virtual reality computers located in deep interstellar space
* The Aestivation hypothesis, in which extraterrestrial civilizations put themselves into a "pause" mode until the background temperature of the universe has become much lower
* The Quarantine or Zoo hypothesis, in which extraterrestrial civilizations are deliberately shielding their existence from an immature civilization like ours
* The Great Filter hypothesis, in which life on other planets has a high probability, either of failing to progress to the level of space-travel, or of failing to exist for long after attaining the ability to self-destruct
* Possible examples of "great filters"
* Should we hope to find signs of life on Mars?
* The Simulation hypothesis, in which the universe is itself a kind of video game, created by simulators, who had no need (or lacked sufficient resources) to create more than one intelligent civilization
* Implications of this discussion for the wisdom of the METI project - Messaging to Extraterrestrial Intelligence

Selected follow-up reading:
* Anders' website at FHI Oxford: https://www.fhi.ox.ac.uk/team/anders-sandberg/
* The Great Filter, by Robin Hanson: http://mason.gmu.edu/~rhanson/greatfilter.html
* "Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life" - a book by Stephen Webb: https://link.springer.com/book/10.1007/978-3-3
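Since the Drake equation heads the topic list above, here is a minimal worked sketch in Python - ours, with placeholder parameter values rather than figures from the episode - showing how strongly the estimate swings with the more uncertain factors.

```python
# Minimal sketch (placeholder values, not estimates from the episode).
# The Drake equation multiplies seven factors to estimate N, the number of
# detectable civilisations in our galaxy.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """R_star: star formation rate per year; f_p: fraction of stars with planets;
    n_e: habitable planets per such star; f_l: fraction that develop life;
    f_i: fraction that develop intelligence; f_c: fraction that emit detectable
    signals; L: years a civilisation remains detectable."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

print(drake(1.5, 0.9, 0.4, 0.5, 0.1, 0.1, 10_000))   # 27.0 with these guesses
print(drake(1.5, 0.9, 0.4, 0.01, 0.01, 0.1, 1_000))  # ~0.005: effectively alone
```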

Dec 28, 2022 • 32min
Enabling Extended Reality, with Steve Dann
An area of technology that has long been anticipated is Extended Reality (XR), which includes Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). For many decades, researchers have developed various experimental headsets, glasses, gloves, and even immersive suits, to give wearers of these devices the impression of existing within a reality that is broader than what our senses usually perceive. More recently, a number of actual devices have come to the market, with, let's say it, mixed reactions. Some enthusiasts predict rapid improvements in the years ahead, whereas other reviewers focus on disappointing aspects of device performance and user experience.

Our guest in this episode of the London Futurists Podcast is someone widely respected as a wise guide in this rather turbulent area. He is Steve Dann, who among other roles is the lead organiser of the highly popular Augmenting Reality meetup in London.

Topics discussed in this episode include:
*) Steve's background in film and television special effects
*) The different forms of Extended Reality
*) Changes in public understanding of virtual and augmented reality
*) What can be learned from past disappointments in this field
*) Prospects for forthcoming tipping points in market adoption
*) Comparisons with the market adoption of smartwatches and of smartphones
*) Forecasting incremental improvements in key XR technologies
*) Why "VR social media" won't be a sufficient reason for mass adoption of VR
*) The need for compelling content
*) The particular significance of enterprise use cases
*) The potential uses of XR in training, especially for medical professionals
*) Different AR and VR use cases in medical training - and different adoption timelines
*) Why an alleged drawback of VR may prove to be a decisive advantage for it
*) The likely forthcoming battle over words such as "metaverse"
*) Why our future online experiences will increasingly be 3D
*) Prospects for open standards between different metaverses
*) Reasons for companies to avoid rushing to purchase real estate in metaverses
*) Movies that portray XR, and the psychological perception of "what is real"
*) Examples of powerful real-world consequences of VR experiences.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://www.meetup.com/augmenting-reality/
https://www.medicalrealities.com/about

Dec 21, 2022 • 34min
Governing the transition to AGI, with Jerome Glenn
Our guest on this episode is someone with excellent connections to the foresight departments of governments around the world. He is Jerome Glenn, Founder and Executive Director of the Millennium Project.

The Millennium Project is a global participatory think tank established in 1996, which now has over 70 nodes around the world. It has the stated purpose to "Improve humanity's prospects for building a better world". The organisation produces regular "State of the Future" reports as well as updates on what it describes as "the 15 Global Challenges". It recently released an acclaimed report on three scenarios for the future of work. One of its new projects is the main topic in this episode, namely scenarios for the global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI).

Topics discussed in this episode include:
*) Why many futurists are jealous of Alvin Toffler
*) The benefits of a decentralised, incremental approach to foresight studies
*) Special features of the Millennium Project compared to other think tanks
*) How the Information Revolution differs from the Industrial Revolution
*) What is likely to happen if there is no governance of the transition to AGI
*) Comparisons with regulating the use of cars - and the use of nuclear materials
*) Options for licensing, auditing, and monitoring
*) How the development of a technology may be governed even if it has few visible signs
*) Three options: "Hope", "Control", and "Merge" - but all face problems; in all three cases, getting the initial conditions right could make a huge difference
*) Distinctions between AGI and ASI (Artificial Superintelligence), and whether an ASI could act in defiance of its initial conditions
*) Controlling AGI is likely to be impossible, but controlling the companies that are creating AGI is more credible
*) How actions taken by the EU might influence decisions elsewhere in the world
*) Options for "aligning" AGI as opposed to "controlling" it
*) Complications with the use of advanced AI by organised crime and by rogue states
*) The poor level of understanding of most political advisors about AGI, and their tendency to push discussions back to the issues of ANI
*) Risks of catastrophic social destabilisation if "the mother of all panics" about AGI occurs on top of existing culture wars and political tribalism
*) Past examples of progress with technologies that initially seemed impossible to govern
*) The importance of taking some initial steps forward, rather than being overwhelmed by the scale of the challenge.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://en.wikipedia.org/wiki/Jerome_C._Glenn
https://www.millennium-project.org/
https://www.millennium-project.org/first-steps-for-artificial-general-intelligence-governance-study-have-begun/
The 2020 book "After Shock: The World's Foremost Futurists Reflect on 50 Years of Future Shock - and Look Ahead to the Next 50"

Dec 14, 2022 • 30min
Introducing Decision Intelligence, with Steven Coates
This episode features the CEO of Brainnwave, Steven Coates, who is a pioneer in the field of Decision Intelligence.

Decision Intelligence is the use of AI to enhance the ability of companies, organisations, or individuals to make key decisions - decisions about which new business opportunities to pursue, about evidence of possible leakage or waste, about the allocation of personnel to tasks, about geographical areas to target, and so on.

What these decisions have in common is that they can all be improved by the analysis of large sets of data that defy attempts to reduce them to a single dimension. In these cases, AI systems that are suited to multi-dimensional analysis can make all the difference between wise and unwise decisions.

Topics discussed in this episode include:
*) The ideas initially pursued at Brainnwave, and how they evolved over time
*) Real-world examples of Decision Intelligence - in the mining industry, the supply of mobile power generators, and in the oil industry
*) Recommendations for businesses to focus on Decision Intelligence as they adopt fuller use of AI, on account of the direct impact on business outcomes
*) Factors holding up the wider adoption of AI
*) Challenges when "data lakes" turn into "data swamps"
*) Challenges with the limits of trust that can be placed in data
*) Challenges with the lack of trust in algorithms
*) Skills in explaining how algorithms are reaching their decisions
*) The benefits of an agile mindset in introducing Decision Intelligence.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://brainnwave.ai/

Dec 7, 2022 • 32min
Developing responsible AI, with Ray Eitel-Porter
As AI automates larger portions of the activities of companies and organisations, there's a greater need to think carefully about questions of privacy, bias, transparency, and explainability. Due to scale effects, mistakes made by AI and the automated analysis of data can have wide impacts. On the other hand, evidence of effective governance of AI development can deepen trust and accelerate the adoption of significant innovations.

One person who has thought a great deal about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In this episode of the London Futurists Podcast, he explains what conclusions he has reached.

Topics discussed include:
*) The meaning and importance of "Responsible AI"
*) Connections and contrasts with "AI ethics" and "AI safety"
*) The advantages of formal AI governance processes
*) Recommendations for the operation of an AI ethics board
*) Anticipating the operation of the EU's AI Act
*) How different intuitions of fairness can produce divergent results
*) Examples where transparency has been limited
*) The potential future evolution of the discipline of Responsible AI.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://www.accenture.com/gb-en/services/applied-intelligence/ai-ethics-governance

Nov 30, 2022 • 31min
Anticipating Longevity Escape Velocity, with Aubrey de Grey
One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.

Our guest in this episode is someone who has been at the forefront, for over 20 years, of a series of breakthrough initiatives in this field of rejuvenation biotechnology. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity.

Topics discussed include:
*) Different concepts of aging and damage repair
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures
*) Questions of pace of development: cautious versus bold
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV')
*) The "Less Death" initiative
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming availability of powerful rejuvenation treatments
*) Various advocacy initiatives that Aubrey is supporting
*) Ways in which listeners can help to accelerate the attainment of LEV.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://levf.org
https://lessdeath.org

Nov 23, 2022 • 33min
Expanding humanity's moral circle, with Jacy Reese Anthis
A Venn diagram of people interested in how AI will shape our future, and members of the effective altruism community (often abbreviated to EA), would show a lot of overlap. One of the rising stars in this overlap is our guest in this episode, the polymath Jacy Reese Anthis.

Our discussion picks up themes from Jacy's 2018 book "The End of Animal Farming", including an optimistic roadmap toward an animal-free food system, as well as factors that could alter that roadmap.

We also hear about the work of an organisation co-founded by Jacy: the Sentience Institute, which researches - among other topics - the expansion of moral considerations to non-human entities. We discuss whether AIs can be sentient, how we might know if an AI is sentient, and whether the design choices made by developers of AI will influence the degree and type of sentience of AIs.

The conversation concludes with some ideas about how various techniques can be used to boost personal effectiveness, and considers different ways in which people can relate to the EA community.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://www.sentienceinstitute.org/
https://jacyanthis.com/

Nov 16, 2022 • 30min
Hacking the simulation, with Roman Yampolskiy
In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall.

In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon.
2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.

The reason for this is that if it is possible, and civilisations can become advanced without exploding, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one.

Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation.

One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/
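The "vanishingly unlikely" step is essentially a counting argument. Here is a minimal sketch of that arithmetic - ours, with made-up numbers, and assuming simulated and un-simulated civilisations have populations of comparable size.

```python
# Minimal sketch (made-up numbers): the counting step of the simulation argument.
# If each un-simulated ("base reality") civilisation runs many ancestor
# simulations of comparable population, a randomly chosen civilisation is
# almost certainly one of the simulated ones.
def p_base_reality(sims_per_civilisation):
    total = 1 + sims_per_civilisation          # one base civilisation + its sims
    return 1 / total

for sims in (0, 10, 1_000, 1_000_000):
    print(sims, p_base_reality(sims))
# 0 -> 1.0   10 -> ~0.09   1000 -> ~0.001   1000000 -> ~1e-06
```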

Nov 9, 2022 • 39min
Pioneering AI drug development, with Alex Zhavoronkov
This episode discusses progress at Insilico Medicine, the AI drug development company founded by our guest, longevity pioneer Alex Zhavoronkov.

1.20 In Feb 2022, Insilico got an IPF drug into phase 1 clinical trials: a first for a wholly AI-developed drug
1.50 Insilico is now well-funded; its software is widely used in the pharma industry
2.30 How drug development works. First you create a hypothesis about what causes a disease
4.00 Pandaomics is Insilico's software to generate hypotheses. It combines 20+ AI models, and huge public data repositories
6.00 This first phase is usually done in academia. It usually costs billions of dollars to develop a hypothesis. 95% of them fail
6.50 The second phase is developing a molecule which might treat the disease
7.15 This is the job of Insilico's Chemistry 42 platform
7.30 The classical approach is to test thousands of molecules to see if they bind to the target protein
7.50 AI, by contrast, is able to "imagine" a novel molecule which might bind to it
8.00 You then test 10-15 molecules which have the desired characteristics
8.20 This is done with a variety of genetic algorithms, Generative Adversarial Networks (GANs), and some Transformer networks
8.35 Insilico has a "zoo" of 40 validated models
10.40 Given the ten-fold improvement, why hasn't the whole drug industry adopted this process?
10.50 They do all have AI groups and they are trying to change, but they are huge companies, and it takes time
11.50 Is it better to invent new molecules, or re-purpose old drugs, which are already known to be safe in humans?
13.00 You can't gain IP with re-purposed drugs: either somebody else "owns" them, or they are already generic
15.00 The IPF drug was identified during aging research, using aging clocks, and a deep neural net trained on longitudinal data
17.10 The third phase is where Insilico's other platform, InClinico, comes into play
17.35 InClinico predicts the results of phase 2 (clinical efficacy) trials
18.15 InClinico is trained on massive data sets about previous trials
19.40 InClinico is actually Insilico's oldest system. Its value has only been ascertained now that some drugs have made it all the way through the pipeline
22.05 A major pharma company asked Insilico to predict the outcome of ten of its trials
22.30 Nine of these ten trials were predicted correctly
23.00 But the company decided that adopting this methodology would be too much of an upheaval; it was unwilling to rely on outsiders so heavily
24.15 Hedge funds and banks have no such qualms
24.25 Insilico is doing pilots for their investments in biotech startups
26.30 Alex is from Latvia originally, studied in Canada, started his career in the US, but Insilico was established in Hong Kong. Why?
27.00 Chinese CROs, Contract Research Organisations, enable you to do research without having your own wetlab
28.00 Like Apple, Insilico designs in the US and does operations in China. You can also do clinical studies there
28.45 They needed their own people inside those CROs, so had to be co-located
29.10 Hong Kong still has great IP protection, financial expertise, scientific resources, and is a beautiful place to live
29.40 Post-Covid, Insilico also had to set up a site in Shanghai
30.35 It is very frustrating how much opposition has built up against international co-operation
32.00 Anti-globalisation ideas and attitudes are bad for longevity research, and all of biotech
33.20 Insilico has all the data it
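The notes at 7.50-8.35 describe a generate-score-select loop. The toy Python sketch below illustrates only the genetic-algorithm part of that idea in the abstract - it is our own illustration, with bit strings standing in for molecules and a made-up scoring function standing in for predicted binding affinity; it bears no relation to the internals of Insilico's actual platforms.

```python
# Toy sketch (illustration only): a generate-score-select loop of the kind used
# in genetic-algorithm molecule design. Candidates are bit strings; score() is
# a stand-in for a predicted-binding model.
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # pretend "ideal" feature profile

def score(candidate):                            # higher = better predicted binder
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for _ in range(20):
    population.sort(key=score, reverse=True)     # score every candidate
    parents = population[:10]                    # select the best
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=score)
print(score(best), best)                         # converges towards TARGET
```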