Google's A.I. launches started with Google Bard in February 2023, which had a rocky debut after it provided incorrect information about the James Webb Space Telescope. The next launch, Gemini, sparked controversy when its image generator refused to produce images of white people. AI Overviews are Google's bet on the future of A.I. in search, aiming to enhance the user experience but raising concerns about locking users into Google's ecosystem and cutting off referral traffic to other websites. Recently, AI Overviews sparked fresh concerns by offering dangerous advice, like suggesting people eat rocks, highlighting the potential risks and ethical dilemmas of A.I. technology.
This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.
Guests:
- Josh Batson, research scientist at Anthropic
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.