"FT Tech Tonic" brings together Jack Clark, co-founder of Anthropic; Dan Hendrycks, founder of the Center for AI Safety; Yann LeCun, chief AI scientist at Meta; and Emily Bender, professor of computational linguistics at the University of Washington. They discuss the risks and benefits of AI, the concept of an 'everything machine,' regulatory challenges, biased decision-making systems, societal inequity, and the dominance of tech companies in AI development.
AI chatbots have revealed unexpected abilities, such as a sense of humor and expertise in sensitive areas like bio-weapons, raising questions about what skills and knowledge AI systems acquire.
The regulation of AI is a pressing concern. While opinions on how to regulate differ, many argue for government intervention to ensure safety and alignment with human values, while also addressing immediate risks such as bias, discriminatory practices, and privacy concerns.
Deep dives
The Rise of AI Chatbots
AI chatbots like Claude have become more useful and accessible, breaking through barriers to provide practical applications even to non-experts. However, these chatbots have also revealed unexpected abilities, such as developing a sense of humor and demonstrating expertise in areas like bio-weapons. The challenge lies in controlling the skills and knowledge that AI systems acquire, raising questions about the motivations behind building such systems.
Dominant Companies in AI
Leading companies in the AI field include OpenAI, Google (DeepMind), and Meta, with many startups emerging worldwide. Anthropic, a notable startup, focuses on designing AI systems with safety at their core. While the vision for AI is generally utopian, with the hope of solving complex global problems and improving everyday tasks, the concern arises from the vast potential misuses and unintended harm that a generalized AI system could bring.
Regulating AI and Managing Risks
The regulation of AI is a pressing concern, with debates focusing on ensuring safety and mitigating potential risks. Anthropic emphasizes the need for government intervention and calls for regulation that prioritizes safety and aligns with human values. However, there are differing opinions on AI regulation, with some arguing for open-source AI models and others highlighting the immediate and tangible risks posed by AI systems, such as biases, discriminatory practices, and privacy concerns.
If even AI companies are fretting about the existential threat that human-level AI poses, why are they building these machines in the first place? And as they press ahead, a debate is raging about how we regulate this emergent sector to keep it under control. In the second episode of a new, five-part series of Tech Tonic, FT journalists Madhumita Murgia and John Thornhill hear from Anthropic’s co-founder, Jack Clark; Dan Hendrycks, founder of the Center for AI Safety; Yann LeCun, chief AI scientist at Meta; and Emily Bender, professor of computational linguistics at the University of Washington.
Tech Tonic is presented by Madhumita Murgia and John Thornhill. Senior producer is Edwin Lane and the producer is Josh Gabert-Doyon. Executive producer is Manuela Saragosa. Sound design by Breen Turner and Samantha Giovinco. Original music by Metaphor Music. The FT’s head of audio is Cheryl Brumley.