AI chatbots are increasingly able to tailor responses to individual users based on their preferences, including their political views. Because a personalized answer is framed as the chatbot's best guess for that user, companies may face less pressure over any single response. But as AI becomes embedded in core services like search, users may hold companies accountable for offensive or incorrect answers, since people judge search results differently than they judge a personalized chatbot reply. That gap creates challenges for managing user expectations and assigning accountability.
Warning: This episode contains strong language.
Google removed the ability to generate images of people from its Gemini chatbot. We talk about why, and about the brewing culture war over artificial intelligence. Then, did Kara Swisher start “Hard Fork”? We clear up some podcast drama and ask about her new book, “Burn Book.” And finally, the legal expert Daphne Keller tells us how the U.S. Supreme Court might rule on the most important First Amendment cases of the internet era, and what Star Trek and soy boys have to do with it.
Today’s guests:
- Kara Swisher, tech journalist and Casey Newton’s former landlord
- Daphne Keller, director of the program on platform regulation at Stanford University’s Cyber Policy Center
We want to hear from you. Email us at hardfork@nytimes.com.
Find “Hard Fork” on YouTube and TikTok.