Deep dives
The Economist's Transformation in AI Coverage
The Economist, known for its initial skepticism towards AI, has evolved its coverage of the subject. The shift was not a deliberate policy change but the product of an editorial ethos that embraces the potential of technology. The publication now aims to understand AI and its implications, while also looking into managing its downsides and making it work for human values. This change tempers journalism's natural skepticism with an appreciation for progress and potential. The Economist aims to describe AI trends, help readers navigate them, and encourage a sense of agency in shaping the future.
The AI Debate: Looking Beyond 10 Years
While The Economist has improved its coverage of AI, it has been criticized for not exploring the implications of AGI and superintelligence, or of the Economic Singularity. The publication's focus on shorter-term trends has left these topics largely unaddressed. The host argues that even if these events are decades away, it is crucial to consider their potential impact now. He suggests that the publication incorporate a longer-term focus, discussing how society can navigate these advancements and better prepare for the possibilities they bring.
Understanding AGI vs. Human Intelligence
The host and guest discuss the concept of artificial general intelligence (AGI) and whether machines can replicate the full range of human cognitive abilities. While the guest accepts that AI can surpass human performance in specific tasks, he maintains that certain cognitive capabilities, particularly the human capacity for mental models and imagination, cannot be fully replicated in AI systems. He argues that these uniquely human qualities, such as reframing, creativity, and intuition, will remain beyond the reach of machine intelligence, despite AI's progress and superhuman capabilities in narrow domains.
The Nature of Consciousness and AI's Limitations
The conversation delves into the limits of AI's understanding of consciousness and the human experience. The guest argues that human consciousness cannot be replicated or truly understood by AI, as it encompasses not only information but the absence of it and a sense of mystery. He suggests that AI's focus on information and mimicry falls short in capturing the essence of human experiences, which are shaped by embodiment, emotion, and our capacity to transcend ourselves. The guest expresses skepticism towards the idea that AI can truly replicate all aspects of human intelligence and cautions against idolizing machines.
Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.
These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.
For the past decade, Kenn has hosted its weekly tech podcast, Babbage. He is co-author of the 2013 book “Big Data”, a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.
Kenn recently stepped down as a board director of Chatham House and a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.
Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/
Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds
*) Different levels of creativity: not just playing games well but inventing new games
*) How many people will have paid jobs in the future?
*) Two final arguments why key human abilities will remain unique
*) The "pragmatic turn" in AI: duplicating without understanding
*) The special value, not of information, but of the absence of information (emptiness, kenosis, the "cloud of unknowing")
*) The temptations of mimicry and idolatry
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration