
London Futurists

Latest episodes

Jun 21, 2023 • 33min

Catastrophe and consent

In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?

The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at what might be the very bad outcomes, and also the very good outcomes, from the emergence of AI superintelligence.

Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Jun 16, 2023 • 33min

The 4 Cs of Superintelligence

The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.

Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth (see the short numeric sketch after these notes)
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending

The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
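
One of the topics above, the dramatic power of exponential growth, is easy to make concrete. The short Python sketch below is a generic illustration under our own assumptions - the specific figures discussed in the episode are not in these notes - showing how quickly repeated doubling escalates.

# Generic illustration of exponential growth (not the episode's own figures):
# a quantity that keeps doubling is multiplied by about a thousand after 10
# doublings, a million after 20, and a billion after 30.
def after_doublings(start: float, n: int) -> float:
    """Value of `start` after n doublings."""
    return start * (2 ** n)

for n in (10, 20, 30):
    print(f"after {n:2d} doublings: x{after_doublings(1, n):,.0f}")
# Prints roughly: x1,024 then x1,048,576 then x1,073,741,824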
Jun 8, 2023 • 47min

GPT-4 transforming education, with Donald Clark

The launch of GPT-4 on 14th March has provoked concerns and searching questions, and nowhere more so than in the education sector. Earlier this month, the share price of US edtech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model. Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could one day have a personal tutor as effective as Aristotle, who tutored Alexander the Great. When that happens, students should leave school and university far, far better educated than we were.

Donald Clark is the ideal person to discuss this with. He founded Epic Group in 1983, and made it the UK's largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.

Selected follow-ups:
https://donaldclarkplanb.blogspot.com/
https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
https://www.amazon.co.uk/Case-against-Education-System-Waste/dp/0691196451/
https://www.amazon.co.uk/Head-Hand-Heart-Intelligence-Over-Rewarded/dp/1982128461/

Topics addressed in this episode include:
*) "Education is a bit of a slow learner"
*) Why GPT-4 has unprecedented potential to transform education
*) The possibility of an online universal teacher
*) Traditional education sometimes fails to follow best pedagogical practice
*) Accelerating "time to competence" via personalised tuition
*) Calum's experience learning maths
*) How Khan Academy and Duolingo are partnering with GPT-4
*) The significance of the large range of languages covered by ChatGPT
*) The recent essay on "The Age of AI" by Bill Gates
*) Students learning social skills from each other
*) An imbalanced societal focus on educating and valuing "head" rather than "heart" or "hand"
*) "The Case against Education" by Bryan Caplan
*) Evidence of wide usage of ChatGPT by students of all ages
*) Three gaps between GPT-4 and AGI, and how they are being bridged by including GPT-4 in "ensembles"
*) GPT-4 has a better theory of physics than GPT-3.5
*) Encouraging a generative AI to learn about a worldview via its own sensory input, rather than directly feeding a worldview into it
*) Pros and cons of "human exceptionalism"
*) How GPT-4 is upending our ideas on the relation between language and intelligence
*) Generative AI, the "C skills", and the set of jobs left for humans to do
*) Custer's last stand?
*) Three camps regarding progress toward AGI
*) Investors' reactions to Italy banning ChatGPT (subsequently reversed)
*) Different views on GDPR and European legislation
*) Further thoughts on implications of GPT-4 for the education industry
*) Shocking statistics on declining enrolment numbers in US universities
*) Beyond exclusivity: "A...
May 31, 2023 • 32min

GPT-4 and the EU’s AI Act, with John Higgins

The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. As people realised that GPT technology was a game-changer, they called for the Act to be reconsidered.

Famously, the EU contains no tech giants, so cutting-edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world's most proactive regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.

The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It is a risk-based approach.

John Higgins joins us in this episode to discuss the AI Act. John is the Chair of the Global Digital Foundation, a think tank, and last year he was president of BCS (the British Computer Society), the professional body for the IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Follow-up reading:
https://www.globaldigitalfoundation.org/
https://artificialintelligenceact.eu/

Topics addressed in this episode include:
*) How different is generative AI from the productivity tools that have come before?
*) Two approaches to regulation compared: a "Franco-German" approach and an "Anglo-American" approach
*) The precautionary principle, for when a regulatory framework needs to be established in order to provide market confidence
*) The EU's preference for regulating applications rather than regulating technology
*) The types of application that matter most - when there is an impact on human rights and/or safety
*) Regulations in the Act compared to the principles that good developers will in any case be following
*) Problems with lack of information about the data sets used to train LLMs (Large Language Models)
*) Enabling the flow, between the different "providers" within the AI value chain, of information about compliance
*) Two potential alternatives to how the EU aims to regulate AI
*) How an Act passes through EU legislation
*) Conflicting assessments of the GDPR: a sledgehammer to crack a nut?
*) Is it conceivable that LLMs will be banned in Europe?
*) Why are there no tech giants in Europe? Does it matter?
*) Other metrics for measuring the success of AI within Europe
*) Strengths and weaknesses of the EU single market
*) Reasons why the BCS opposed the moratorium proposed by the FLI: impracticality, asymmetry, benefits held back
*) Some counterarguments in favour of the FLI position
*) Projects undertaken by the Global Digital Foundation
*) The role of AI in addressing (as well as exacerbating) hate speech
*) Growing concerns over populism, polarisation, and post-truth
*) The need for improved transparency and improved understanding

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
May 24, 2023 • 34min

Longevity, the 56 trillion dollar opportunity, with Andrew Scott

Technological changes have economic impact. It's not just that technology allows more goods and services to be produced more efficiently and at greater scale. It's also that these changes disrupt previous assumptions about the conduct of human lives, human relationships, and the methods used to save money to buy goods and services. A society in which people expect to die around the age of 100, or even older, needs to make different plans than a society in which people expect to die in their 70s.

Some politicians, in unguarded moments, have even occasionally expressed a desire for retired people to "hurry up and die", on account of the ballooning costs of pension payments and healthcare for the elderly. These politicians worry about the negative consequences of longer lives. In their view, longer lives would be bad for the economy.

But not everyone thinks that way. Indeed, a distinguished professor of economics at the London Business School, Andrew J Scott, has studied a variety of different future scenarios for the economic consequences of longer lives. He is our guest in this episode.

In addition to his role at the London Business School, Andrew is a Research Fellow at the Centre for Economic Policy Research and a consulting scholar at Stanford University's Center on Longevity. His research has been widely published in leading journals in economics and health. His book "The 100-Year Life" has been published in 15 languages, is an Amazon bestseller, and was runner-up in both the FT/McKinsey and Japanese Business Book of the Year Awards.

Andrew has been an advisor on policy to a range of governments. He currently sits on the advisory board of the UK's Office for Budget Responsibility and the Cabinet Office Honours Committee (Science and Technology), is a co-founder of The Longevity Forum, and is a member of the National Academy of Medicine's International Commission on Healthy Longevity and the WEF council on Healthy Ageing and Longevity.

Follow-up reading:
https://profandrewjscott.com/
https://www.nature.com/articles/s43587-021-00080-0

Topics addressed in this episode include:
*) Why Andrew wrote the book "The 100-Year Life" (co-authored with Lynda Gratton)
*) Shortcomings of the conventional narrative of "the aging society"
*) The profound significance of aging being malleable
*) Joint research with David Sinclair (Harvard) and Martin Ellison (Oxford): economic modelling of the future of healthspan and lifespan
*) Four different scenarios: Struldbruggs, Dorian Gray, Peter Pan, and Wolverine
*) The multi-trillion dollar economic value of everyone in the USA gaining one additional year of life in good health
*) The first and second longevity revolutions
*) The virtuous circle around aging research
*) Options for lives that are significantly longer even than 100 years
*) The ill-preparedness of our social structures for extensions in longevity - and, especially, for the attainment of longevity escape velocity
*) The possibility of rapid changes in society's expectations
*) The three-dimensional longevity dividend
*) Developments in Singapore and the UAE
*) Two important political initiatives: supporting the return to the workforce of people aged over 50, and paying greater attention to national statistics on expected healthspan
*) Themes from Andrew's forthcoming new book "Evergreen"
*) Why 57 isn't the new 40: it's the new 57
*) Making a friend of your future self

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
May 17, 2023 • 36min

The key workforce skills for 2026, with Mike Howells

One of the questions audiences frequently used to ask futurists was: which careers are most likely to be future-proof? However, that question has changed in recent years. It's now more widely understood that every career is subject to disruption by technological and social trends. No occupation is immune to change. So the question has switched, away from possible future-proof careers, to the skills that are most likely to be useful in these fast-changing circumstances. For example, should everyone be learning to code, or deepening their knowledge of STEM - that is, Science, Technology, Engineering, and Maths? Or should there be more focus on so-called human skills or soft skills?

Who better to answer that question than our guest in this episode, Mike Howells? Mike is the President of the Workforce Skills Division at Pearson, the leading learning company.

The perennial debate about when and how advanced AI will cause widespread disruption in education has been given extra impetus by the launch of ChatGPT last November, and GPT-4 in March. Pearson, a venerable British company which has gone through various incarnations, is one of the companies at the sharp end of this debate about the changing role of technology in education. The share price of several of these companies suffered a temporary setback recently, due to a perception that GPT technology would replace many of their services. However, Pearson and its peers have rebutted these claims, and the stock has largely recovered.

Indeed, with what could be viewed as considerable prescience, Pearson carried out a major piece of research before ChatGPT was launched, to identify which skills employers are prioritising for their new hires - new employees who will be in their stride in 2026, three years from now.

Follow-up reading:
https://www.pearson.com/
https://plc.pearson.com/en-GB/insights/pearson-skills-outlook-powerskills

Topics addressed in this episode include:
*) Some lessons from Mike's own career trajectory
*) How Pearson used AI in their survey of key workforce skills
*) The growing importance - and growing value - of human skills
*) The top 5 "power skills" that employers are seeking today
*) The top 5 "power skills" that are projected to be most in demand by 2026 - and which are in need of greatest improvement and investment
*) Given that there are no university courses in these skill areas, how can people gain proficiency in them?
*) Three ways of inferring evidence of someone's proficiency in these skill areas
*) How the threat of automation has moved from blue-collar jobs to white-collar jobs
*) People are used to taking data-driven decisions in many areas of their lives - e.g. which restaurants to visit or which holidays to book - but the data about the effect of various educational courses is surprisingly thin
*) The increasing need for data-driven retraining
*) Ways in which the retraining experience can be improved by AI and VR/AR/XR
*) The attraction of digital assistants that can provide personalised tuition, especially as costs drop
*) School-age children often already use their skills with existing technology to augment and personalise their learning
*) Complications with privacy, security, consent, and measuring efficacy
*) "It's not about what you've done; it's about what you can do"
*) A closer look at "personal learning and mastery" and "cultural and social intelligence"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
May 10, 2023 • 44min

How to use GPT-4 yourself, with Ted Lappas

The last few episodes of our podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In this latest episode, Ted Lappas, a data scientist and academic, helps us to take a pragmatic turn - to understand what GPT technology can do for each of us individually.

Ted is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London's largest independent AI consultancy before it was acquired last year by the media giant WPP.

Follow-up reading:
https://satalia.com/
https://www.linkedin.com/in/theodoros-lappas-82771451/

Topics addressed in this episode include:
*) The "GPT paradox": if GPT-4 is so good, why aren't more people using it to boost their effectiveness in their workplace?
*) Concerns in some companies that data entered into GPTs will leak out and assist their competitors
*) Uses of GPTs to create or manipulate text, and to help developers to understand new code
*) GPTs as "brains" that lack the "limbs" that would make them truly useful
*) GPT capabilities are being augmented via plug-ins that access sites like Expedia, Instacart, or Zapier
*) Agent-based systems such as AutoGPT and AgentGPT that utilise GPTs to break down tasks into steps and then carry out these steps
*) Comparison with the boost given to Apple iPhone adoption by the launch, one year later, of the iOS App Store
*) Ted's use of GPT-4 in his role as a meta-reviewer for papers submitted to an academic conference - with Ted becoming an orchestrator more than a writer
*) The learning curve is easier for vanilla GPTs than for agent systems that use GPTs
*) GPTs are currently more suited to low-end writing than to high-end writing, but are expected to move up the value chain
*) Ways to configure a GPT so that it can reproduce the quality level or textual style of a specific writer
*) Calum's use of GPT-4 in his side-project as a travel writer
*) Ways to stop GPTs inventing false anecdotes
*) Some users of GPTs will lose all faith in them due to just a single hallucination
*) Teaching GPTs to say "I don't know" or to state their level of confidence about claims they make
*) Creating an embedding space search engine (see the illustrative sketch after these notes)
*) The case for gaining a working knowledge of the programming language Python
*) The growth of technology-explainer videos on TikTok and Instagram
*) "Explain this to me like I'm ten years old"
*) The way to learn more about GPTs is to use them in a meaningful project
*) Learning about GPTs such as DALL-E or Midjourney that generate not text but images
*) Uses of GPTs for inpainting - blending new features into an image
*) The advantages of open source tools, such as those available on Hugging Face
*) Images will be largely solved in 2023; 2024 will be the year for video
*) An appeal to "dive in, the sooner the better"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
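
One topic in the list above, creating an embedding space search engine, lends itself to a short illustration. Below is a minimal Python sketch under our own assumptions: the embed() function is a deliberately crude stand-in (normalised letter counts) so the example runs with no external services, whereas a real system would call an actual embedding model; none of these function names come from the episode.

# Minimal embedding-space search sketch. embed() is a toy placeholder;
# swap in a real text-embedding model for meaningful results.
import math
from collections import Counter

def embed(text: str) -> list[float]:
    """Toy embedding: normalised counts of the letters a-z."""
    counts = Counter(text.lower())
    vec = [float(counts.get(chr(c), 0)) for c in range(ord("a"), ord("z") + 1)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors from embed() are unit-length, so the dot product equals the cosine.
    return sum(x * y for x, y in zip(a, b))

def search(query: str, documents: list[str], top_k: int = 3) -> list[tuple[float, str]]:
    """Rank documents by similarity to the query in embedding space."""
    q = embed(query)
    scored = [(cosine(q, embed(doc)), doc) for doc in documents]
    return sorted(scored, reverse=True)[:top_k]

docs = [
    "Configuring a GPT to reproduce a specific writer's style",
    "Travel writing notes from the Greek islands",
    "Teaching language models to say 'I don't know'",
]
for score, doc in search("reducing hallucinations in language models", docs):
    print(f"{score:.3f}  {doc}")

The same structure - embed everything once, then rank by similarity at query time - is what larger vector-search systems do, just with better embeddings and an index in place of the linear scan.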
May 3, 2023 • 34min

GPT: To ban or not to ban, that is the question

On March 14th, OpenAI launched GPT-4, which took the world by surprise and storm. Almost everybody, including people within the AI community, was stunned by its capabilities. A week later, the Future of Life Institute (FLI) published an open letter calling on the world's AI labs to pause the development of larger versions of GPT (generative pre-trained transformer) models until their safety can be ensured.

Recent episodes of this podcast have presented arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In this episode, co-hosts Calum Chace and David Wood highlight some key implications and give their own opinions. Expect some friendly disagreements along the way.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

Topics addressed in this episode include:
*) Definitions of Artificial General Intelligence (AGI)
*) Many analysts knowledgeable about AI have recently brought forward their estimates of when AGI will become a reality
*) The case that AGI poses an existential risk to humanity
*) The continued survival of the second-smartest species on the planet depends entirely on the actions of the actual smartest species
*) One species can cause another to become extinct, without that outcome being intended or planned
*) Four different ways in which advanced AI could have terrible consequences for humanity: bugs in the implementation; the implementation being hacked (or jailbroken); bugs in the design; and the design being hacked by emergent new motivations
*) Near-future AIs that still fall short of being AGI could have effects which, whilst not themselves existential, would plunge society into such a state of dysfunction and distraction that we are unable to prevent subsequent AGI-induced disaster
*) Calum's "4 Cs" categorisation of possible outcomes regarding AGI existential risks: Cease, Control, Catastrophe, and Consent
*) 'Consent' means a superintelligence decides that we humans are fun, enjoyable, interesting, worthwhile, or simply unobjectionable, and consents to let us carry on as we are, or to help us, or to allow us to merge with it
*) The 'Control' option arguably splits into "control while AI capabilities continue to proceed at full speed" and "control with the help of a temporary pause in the development of AI capabilities"
*) Growing public support for stopping AI development - driven by a sense of outrage that the future of humanity is seemingly being decided by a small number of AI lab executives
*) A comparison with how the 1983 film "The Day After" triggered a dramatic change in public opinion regarding the nuclear weapons arms race
*) How much practical value could there be in a six-month pause? Or will the six months be extended into an indefinite ban?
*) Areas where there could be at least some progress: methods to validate the output of giant AI models, and choices of initial configurations that would make the 'Consent' scenario more likely
*) Designs that might avoid the emergence of agency (convergent instrumental goals) within AI models as they acquire more intelligence
*) Why 'Consent' might be the most likely outcome
*) The longer a ban remains in place, the larger the risks of bad actors
Apr 26, 2023 • 30min

The AI suicide race, with Jaan Tallinn

The race to create advanced AI is becoming a suicide race. That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4". In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative.

In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before it was acquired by Google.

More recently, Jaan has been a prominent advocate for the study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn

Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern, all over the world, that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) The AI safety conference in 2015 in Puerto Rico was initially viewed as a massive success, but it has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of 6 months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the Pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) 100-million-dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible 10^25 limit on the number of FLOPs (Floating Point Operations) an AI model can spend (a rough back-of-envelope calculation follows these notes)
*) The reactions by AI lab leaders to the widescale public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
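
To give a sense of scale for the possible 10^25 FLOP limit mentioned in the list above, here is a rough back-of-envelope Python sketch. It assumes the commonly used approximation that training a dense transformer costs roughly 6 x parameters x training tokens floating point operations; the model sizes below are illustrative choices of ours, not figures from the episode.

# Rough scale check against a hypothetical 1e25 training-FLOP threshold.
# Assumes the common approximation: training FLOPs ~= 6 * N_params * N_tokens.
THRESHOLD = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

examples = [
    ("7e9 params,   2e12 tokens", 7e9, 2e12),
    ("70e9 params,  2e12 tokens", 70e9, 2e12),
    ("500e9 params, 10e12 tokens", 500e9, 10e12),
]
for label, n, t in examples:
    flops = training_flops(n, t)
    verdict = "over" if flops > THRESHOLD else "under"
    print(f"{label}: ~{flops:.1e} FLOPs ({verdict} the 1e25 threshold)")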
Apr 19, 2023 • 39min

A defence of human uniqueness against AI encroachment, with Kenn Cukier

Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper. For the past decade, Kenn was the host of its weekly tech podcast Babbage. He is co-author of the 2013 book "Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.

Kenn recently stepped down as a board director of Chatham House and a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.

Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/

Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained, hyper-rational, "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed m...
