This chapter covers the latest features of OpenAI's GPT-4o model, focusing on voice mode and its native multimodal support. The hosts discuss the nuances of the new voice feature, play a demo clip highlighting its conversational abilities, and compare it to previous AI assistants. They also explore advances in AI that processes audio, video, and text simultaneously, emphasizing improved understanding and emotional responsiveness.
This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.