

London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
Episodes

May 10, 2023 • 43min
How to use GPT-4 yourself, with Ted Lappas
The last few episodes of our podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In this latest episode, Ted Lappas, a data scientist and academic, helps us to take a pragmatic turn - to understand what GPT technology can do for each of us individually.

Ted is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London's largest independent AI consultancy before it was acquired last year by the media giant WPP.

Follow-up reading:
https://satalia.com/
https://www.linkedin.com/in/theodoros-lappas-82771451/

Topics addressed in this episode include:
*) The "GPT paradox": if GPT-4 is so good, why aren't more people using it to boost their effectiveness in their workplace?
*) Concerns in some companies that data entered into GPTs will leak out and assist their competitors
*) Uses of GPTs to create or manipulate text, and to help developers to understand new code
*) GPTs as "brains" that lack the "limbs" that would make them truly useful
*) GPT capabilities are being augmented via plug-ins that access sites like Expedia, Instacart, or Zapier
*) Agent-based systems such as AutoGPT and AgentGPT that utilise GPTs to break down tasks into steps and then carry out these steps
*) Comparison with the boost given to Apple iPhone adoption by the launch, one year later, of the iOS App Store
*) Ted's use of GPT-4 in his role as a meta-reviewer for papers submitted to an academic conference - with Ted becoming an orchestrator more than a writer
*) The learning curve is easier for vanilla GPTs than for agent systems that use GPTs
*) GPTs are currently more suited to low-end writing than to high-end writing, but are expected to move up the value chain
*) Ways to configure a GPT so that it can reproduce the quality level or textual style of a specific writer
*) Calum's use of GPT-4 in his side-project as a travel writer
*) Ways to stop GPTs inventing false anecdotes
*) Some users of GPTs will lose all faith in them due to just a single hallucination
*) Teaching GPTs to say "I don't know" or to state their level of confidence about claims they make
*) Creating an embedding space search engine - see the sketch after these notes
*) The case for gaining a working knowledge of the programming language Python
*) The growth of technology-explainer videos on TikTok and Instagram
*) "Explain this to me like I'm ten years old"
*) The way to learn more about GPTs is to use them in a meaningful project
*) Learning about GPTs such as DALL-E or Midjourney that generate not text but images
*) Uses of GPTs for inpainting - blending new features into an image
*) The advantages of open source tools, such as those available on Hugging Face
*) Images will be largely solved in 2023; 2024 will be the year for video
*) An appeal to "dive in, the sooner the better"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
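One of the topics above, creating an embedding space search engine, lends itself to a short illustration. The sketch below is ours, not Ted's: a minimal embedding-based search over a list of documents, where embed() is a toy stand-in (hashed character trigrams) for whichever text-embedding model or API you prefer.

```python
# Minimal, illustrative sketch of an "embedding space search engine".
# embed() is a toy stand-in; in practice you would call a real embedding model.
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash character trigrams into a fixed-length vector."""
    v = np.zeros(DIM)
    t = text.lower()
    for i in range(len(t) - 2):
        v[hash(t[i:i + 3]) % DIM] += 1.0
    return v

def normalise(m: np.ndarray) -> np.ndarray:
    # L2-normalise so that a dot product equals cosine similarity.
    return m / (np.linalg.norm(m, axis=-1, keepdims=True) + 1e-9)

documents = [
    "How to configure a GPT to match a specific writing style",
    "Blending new features into an image with inpainting",
    "Using plug-ins to book travel through sites like Expedia",
]

index = normalise(np.stack([embed(d) for d in documents]))   # one vector per document

def search(query: str, top_k: int = 2):
    scores = index @ normalise(embed(query))                  # cosine similarities
    best = np.argsort(scores)[::-1][:top_k]                   # highest-scoring documents first
    return [(documents[i], round(float(scores[i]), 3)) for i in best]

print(search("travel booking with GPT plug-ins"))
```

Swapping the toy embed() for a real embedding model gives the basic pattern behind document question-answering over your own data.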

May 3, 2023 • 34min
GPT: To ban or not to ban, that is the question
On March 14th, OpenAI launched GPT-4, which took the world by surprise and storm. Almost everybody, including people within the AI community, was stunned by its capabilities. A week later, the Future of Life Institute (FLI) published an open letter calling on the world's AI labs to pause the development of larger versions of GPT (generative pre-trained transformer) models until their safety can be ensured.

Recent episodes of this podcast have presented arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In this episode, co-hosts Calum Chace and David Wood highlight some key implications and give their own opinions. Expect some friendly disagreements along the way.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

Topics addressed in this episode include:
*) Definitions of Artificial General Intelligence (AGI)
*) Many analysts knowledgeable about AI have recently brought forward their estimates of when AGI will become a reality
*) The case that AGI poses an existential risk to humanity
*) The continued survival of the second smartest species on the planet depends entirely on the actions of the actual smartest species
*) One species can cause another to become extinct, without that outcome being intended or planned
*) Four different ways in which advanced AI could have terrible consequences for humanity: bugs in the implementation; the implementation being hacked (or jailbroken); bugs in the design; and the design being hacked by emergent new motivations
*) Near-future AIs that still fall short of being AGI could have effects which, whilst not themselves existential, would plunge society into such a state of dysfunction and distraction that we are unable to prevent subsequent AGI-induced disaster
*) Calum's "4 C's" categorisation of possible outcomes regarding AGI existential risks: Cease, Control, Catastrophe, and Consent
*) 'Consent' means a superintelligence decides that we humans are fun, enjoyable, interesting, worthwhile, or simply unobjectionable, and consents to let us carry on as we are, or to help us, or to allow us to merge with it
*) The 'Control' option arguably splits into "control while AI capabilities continue to proceed at full speed" and "control with the help of a temporary pause in the development of AI capabilities"
*) Growing public support for stopping AI development - driven by a sense of outrage that the future of humanity is seemingly being decided by a small number of AI lab executives
*) A comparison with how the 1983 film "The Day After" triggered a dramatic change in public opinion regarding the nuclear weapons arms race
*) How much practical value could there be in a six-month pause? Or will the six months be extended into an indefinite ban?
*) Areas where there could be at least some progress: methods to validate the output of giant AI models, and choices of initial configurations that would make the 'Consent' scenario more likely
*) Designs that might avoid the emergence of agency (convergent instrumental goals) within AI models as they acquire more intelligence
*) Why 'Consent' might be the most likely outcome
*) The longer a ban remains in place, the larger ...

Apr 26, 2023 • 30min
The AI suicide race, with Jaan Tallinn
The race to create advanced AI is becoming a suicide race. That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4".

In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative.

In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before it was acquired by Google.

More recently, Jaan has been a prominent advocate for the study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn

Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern, all over the world, that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) The AI safety conference in 2015 in Puerto Rico was initially viewed as a massive success, but it has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of 6 months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) 100-million-dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible 10^25 limit on the number of FLOPs (Floating Point Operations) an AI model can spend - see the illustrative calculation after these notes
*) The reactions by AI lab leaders to the widescale public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"

Music: Spike Protein, by Koi Discovery
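To make the mooted 10^25 FLOP limit a little more concrete, the sketch below uses the common rule-of-thumb that training compute is roughly 6 x parameters x training tokens. The model sizes and token counts are illustrative assumptions of ours, not figures from the episode or from the FLI letter.

```python
# Rough illustration of how a 10^25 FLOP training-compute cap might be checked,
# using the approximation C ~= 6 * N * D (N = parameters, D = training tokens).
# The example model configurations below are illustrative assumptions only.
FLOP_CAP = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

examples = [
    ("7B-parameter model, 1T tokens",   7e9,  1e12),
    ("70B-parameter model, 2T tokens",  70e9, 2e12),
    ("1T-parameter model, 10T tokens",  1e12, 1e13),
]

for name, params, tokens in examples:
    c = training_flops(params, tokens)
    verdict = "under cap" if c <= FLOP_CAP else "over cap"
    print(f"{name}: ~{c:.1e} FLOPs ({verdict})")
```

The point of such a threshold is that it is easy to estimate in advance of a training run, which is what would make pre-registration of large-scale experiments auditable.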

Apr 19, 2023 • 38min
A defence of human uniqueness against AI encroachment, with Kenn Cukier
Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.

For the past decade, Kenn was the host of its weekly tech podcast Babbage. He is co-author of the 2013 book "Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.

Kenn recently stepped down as a board director of Chatham House and a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.

Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/

Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds
*) Different levels of creativity: not just playing games well but inventing new games
*) How many people will have paid jobs in the future?
*) Two final arguments why key human abilities will remain unique
*) The "pragmatic turn" in AI: duplicating without understanding
*) The special value, not of information, but of the absence of information (emptiness, kenosis, the "clou...

Apr 12, 2023 • 35min
Against pausing AI research, with Pedro Domingos
Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?

Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slow-down desirable, given that AI can also lead to lots of very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slow-down is desirable, is it practical?

Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".

That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.

Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The "genie problem" (or "King Midas problem") of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Apr 5, 2023 • 33min
Facing our Futures, with Nikolas Badminton
2023 is still young, but there's already a change in the attitudes of many business people regarding the future. Previously, businesses expressed occasional interest in possible disruptive scenarios, but their attention often quickly turned back to the apparently more pressing tasks of business-as-usual. But recent news of changes in AI capabilities, along with possible social transformations due to pandemics, geopolitics, and industrial unrest, is leading more and more business people to wonder: how can they become more effective in anticipating and managing potential significant changes in their business landscape?

In this context, the new book by our guest in this episode, Nikolas Badminton, is particularly timely. It's called "Facing our Futures: How foresight, futures design and strategy creates prosperity and growth".

Over the last few years, Nikolas has worked with over 300 organizations including Google, Microsoft, NASA, the United Nations, American Express, and Rolls Royce, and he advised Robert Downey Jr.'s team for the 'Age of A.I.' documentary series.

Selected follow-up reading:
https://nikolasbadminton.com/
https://futurist.com/
https://www.bloomsbury.com/uk/facing-our-futures-9781399400237/

Topics in this conversation include:
*) A personal journey to becoming a futurist - with some "hot water" along the way
*) The "Dark Futures" project: "what might happen if we take the wrong path forward"
*) The dangers of ignoring how bad things might become
*) Are we heading toward "the end times"?
*) Being in a constant state of collapse
*) Human resilience, and how to strengthen it
*) Futurists as "hope engineers"
*) Pros and cons of the "anti-growth" or "de-growth" initiative
*) The useful positive influence of "design fiction" (including futures that are "entirely imaginary")
*) The risks of a "pay to play" abundance future
*) The benefits of open medicine and open science
*) Examples of decisions taken by corporations after futures exercises
*) Tips for people interested in a career as a futurist
*) Pros and cons of "pop futurists"
*) The single biggest danger in our future?
*) Evidence from Rene Rohrbeck and Menes Etingue Kum that companies who apply futures thinking significantly out-perform their competitors in profitability and growth
*) The idea of an "apocalypse windfall" from climate change
*) Some key messages from the book "Facing our Futures": recommended mindset changes
*) Having the honesty and courage to face up to our mistakes
*) What if... former UK Prime Minister David Cameron had conducted a futures study before embarking on the Brexit project?
*) A multi-generational outlook on the future - learning from the Iroquois

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Mar 29, 2023 • 33min
GPT-4 and the Two Singularities
In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could have been viewed, perhaps, as hype. The changes are new versions of AI systems that are available for users around the world to experiment with, directly, here and now. These systems are being released by multiple different companies, and also by open-source collaborations. And users of these systems are frequently expressing surprise: the systems are by no means perfect, but they regularly out-perform previous expectations, sometimes in astonishing ways.

In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.

Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.

Selected follow-up reading:
https://calumchace.com/the-economic-singularity/
https://calumchace.com/surviving-ai-synopsis/

Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work - see the attention sketch after these notes
*) The difference between self-supervised learning and supervised learning
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft Sydney and Google Bard - and the intended role of RLHF (Reinforcement Learning from Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker has a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI produces, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularity?
*) It now seems that human intelligence is less hard to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent ac...
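For listeners who want a glimpse of "how transformers work", here is a minimal, illustrative sketch of single-head scaled dot-product attention with a causal mask - the core operation inside GPT-style models. It is a toy in plain numpy, not the implementation of any particular production system.

```python
# Toy sketch of single-head scaled dot-product attention with a causal mask,
# the core operation inside transformer models such as GPT (illustrative only).
import numpy as np

def attention(X, Wq, Wk, Wv):
    # X: (sequence_length, d_model) token representations
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # project into query/key/value spaces
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # how strongly each token attends to each other token
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf                        # causal mask: only attend to itself and earlier tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                            # weighted mixture of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                      # 5 tokens, 16-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)             # (5, 16): one updated vector per token
```

Stacking many such attention layers (with multiple heads, feed-forward layers, and learned weights) and training them to predict the next token is, in outline, the self-supervised recipe behind the systems discussed in this episode.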

Mar 22, 2023 • 38min
Creating Benevolent Decentralized AGI, with Ben Goertzel
Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.

Ben is perhaps best known for popularising the term 'artificial general intelligence', or AGI, a machine with all the cognitive abilities of an adult human. He thinks that the way to create this machine is to start with a baby-like AI, and raise it, as we raise children. We would do this either in VR, or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.

Ben is a unique and engaging speaker, and gives frequent keynotes all round the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.

Selected follow-up reading:
https://singularitynet.io/
http://goertzel.org/
http://multiverseaccordingtoben.blogspot.com/

Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization as a paradigm for intelligence
*) Open-ended intelligence vs. intelligence created by reward maximization
*) A concern regarding Effective Altruism
*) There's more to intelligence than pursuit of an overarching goal
*) A broader view of evolution than drives to survive and to reproduce
*) "What the fate of humanity depends on" - selecting the right approach to the creation of AG...

Mar 15, 2023 • 37min
What the good future could look like, with Gerd Leonhard
At a time when many people find it depressingly easy to see how "bad futures" could arise, what is a credible narrative of a "good future"? That question is of central concern to our guest in this episode, Gerd Leonhard.

Gerd is one of the most successful futurists on the international speaker circuit. He estimates that he has spoken to a combined audience of 2.5 million people in more than 50 countries.

He left his home country of Germany in 1982 to go to the USA to study music. While he was in the US, he set up one of the first internet-based music businesses, and then he parlayed that into his current speaking career. His talks and videos are known for their engaging use of technology and design, and he prides himself on his rigorous use of research and data to back up his claims and insights.

Selected follow-ups:
https://www.futuristgerd.com/
https://www.futuristgerd.com/sharing/thegoodfuturefilm/

Topics in this conversation include:
*) The need for a positive antidote to all the negative visions of the future that are often in people's minds
*) People, planet, purpose, and prosperity - rather than an over-focus on profit and economic growth
*) Anticipating stock markets that work differently, and with additional requirements before dividends can be paid
*) A reason to be an optimist: not because we have fewer problems (we don't), but because we have more capacity to deal with these problems
*) From "capitalism" to "progressive capitalism" (another name could be "social capitalism")
*) Kevin Kelly's concept of "protopia" as a contrast to both utopia and dystopia
*) Too much of a good thing can be... a bad thing
*) How governments and the state interact with free markets
*) Managers who try to prioritise people, planet, or purpose (rather than profits and dividends) are "whacked by the stock market"
*) The example of the Montreal Protocol regarding the hole in the ozone layer, when governments gave a strong direction to the chemical industry
*) Some questions about people, planet, purpose, and prosperity are relatively straightforward, but others are much more contested
*) Conflicting motivations within high tech firms regarding speed-to-market vs. safety
*) Controlling the spread of potentially dangerous AI may be much harder than controlling the spread of nuclear weapons technology, especially as costs reduce for AI development and deployment
*) Despite geopolitical tensions, different countries are already collaborating behind the scenes on matters of AGI safety
*) How much "financial freedom" should the definition of a good future embrace?
*) Universal Basic Income and "the Star Trek economy" as potential responses to the Economic Singularity
*) Differing assessments of the role of transhumanism in the good future
*) Risks when humans become overly dependent on technology
*) Most modern humans can't make a fire from scratch: does that matter?
*) The Carrington Event of 1859: the most intense geomagnetic storm in recorded history
*) How views changed in the 19th century about giving anaesthetics to women to counter the (biblically mandated?) intense pains of childbirth
*) Will views change in a similar way about the possibility of external wombs (ectogenesis)?
*) Jamie Bartlett's concept of "the moral singularity" when humans lose the ability to take hard decisions
*) Can AI provide useful advice about human-human relationships?
*) Is everything truly imp...

Mar 8, 2023 • 36min
ChatGPT raises old and new concerns about AI, with Francesca Rossi
Our guest in this episode is Francesca Rossi. Francesca studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM's T.J. Watson Research Lab in New York, where she is now an IBM Fellow and also IBM's AI Ethics Global Leader.

Francesca is a member of numerous international bodies concerned with the beneficial use of AI, including being a board member at the Partnership on AI, a Steering Committee member and designated expert at the Global Partnership on AI, a member of the scientific advisory board of the Future of Life Institute, and Chair of the international conference on Artificial Intelligence, Ethics, and Society which is being held in Montreal in August this year.

From 2022 until 2024 she holds the prestigious role of President of the AAAI, that is, the Association for the Advancement of Artificial Intelligence. The AAAI has recently held its annual conference, and in this episode, Francesca shares some reflections on what happened there.

Selected follow-ups:
https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Francesca.Rossi2
https://en.wikipedia.org/wiki/Francesca_Rossi
https://partnershiponai.org/
https://gpai.ai/

Topics in this conversation include:
*) How a one-year sabbatical at the Harvard Radcliffe Institute changed the trajectory of Francesca's life
*) New generative AI systems such as ChatGPT expand previous issues involving bias, privacy, copyright, and content moderation - because they are trained on very large data sets that have not been curated
*) Large language models (LLMs) have been optimised, not for "factuality", but for creating language that is syntactically correct
*) Compared to previous AIs, the new systems impact a wider range of occupations, and they also have major implications for education
*) Are the "AI ethics" and "responsible AI" approaches that address the issues of existing AI systems also the best approaches for the "AI alignment" and "AI safety" issues raised by artificial general intelligence?
*) Different ideas on how future LLMs could acquire mastery, not only over language, but also over logic, inference, and reasoning
*) Options for combining classical AI techniques focussing on knowledge and reasoning with the data-intensive approaches of LLMs
*) How "foundation models" allow training to be split into two phases, with a shorter supervised phase customising the output from a prior longer unsupervised phase - see the toy sketch after these notes
*) Even experts face the temptation to anthropomorphise the behaviour of LLMs
*) On the other hand, unexpected capabilities have emerged within LLMs
*) The interplay of "thinking fast" and "thinking slow" - adapting, for the context of AI, insights from Daniel Kahneman about human intelligence
*) Cross-fertilisation of ideas from different communities at the recent AAAI conference
*) An extension of that "bridge" theme to involve ideas from outside of AI itself, including the use of methods of physics to observe and interpret LLMs from the outside
*) Prospects for interpretability, explainability, and transparency of AI - and implications for trust and cooperation between humans and AIs
*) The roles played by different international bodies, such as PAI and GPAI
*) Pros and cons of including China in the initial phase of GPAI
*) Designing regulations to be future-proof, with parts that can change quickly
*) ...
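The "foundation model" point about two training phases can be illustrated with a deliberately tiny toy: a character-level bigram model stands in for the real network, "pre-trained" on an uncurated corpus and then nudged by a shorter, more heavily weighted supervised phase. This is our illustrative sketch, not IBM's or anyone else's actual pipeline.

```python
# Toy sketch of the two-phase "foundation model" recipe: a long self-supervised
# phase on a large uncurated corpus, then a short supervised phase on a small
# curated set that adjusts the same model. A character-level bigram model is a
# stand-in for the real network (toy only).
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(float))

def update(text: str, weight: float = 1.0) -> None:
    # Self-supervised signal: each character is trained to predict the next one.
    for a, b in zip(text, text[1:]):
        counts[a][b] += weight

def predict_next(ch: str) -> str:
    options = counts.get(ch, {})
    return max(options, key=options.get) if options else ""

# Phase 1: "pre-training" on a large, uncurated corpus (a stand-in string here).
update("the cat sat on the mat. the dog sat on the log. " * 100)

# Phase 2: shorter supervised customisation on a small curated set,
# weighted more heavily so it steers the model's behaviour.
update("the cat sat on the sofa. ", weight=500.0)

print(predict_next("m"))   # 'a' - a pattern learned in the pre-training phase
```

The real systems discussed in the episode use neural networks rather than counts, but the division of labour is the same: broad statistical knowledge from the unsupervised phase, behaviour shaped by the shorter supervised (or human-feedback) phase.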