OpenAI has announced a 16,000-token context version of GPT-3.5 Turbo. That's four times the previous context length, enough to fit roughly 20 pages of text in a single request. On top of that, they're reducing pricing for their embeddings models by 75%, and introducing function calling, a new way to more reliably connect GPT's capabilities with external tools and APIs.
OpenAI has announced a set of API updates, including lower prices, a larger 16k context window, and something they're calling function calling. On today's episode, NLW explains why function calling in particular is such a big deal. Before that on the Brief: updates from Adobe and Meta, as well as AMD's new superchip and Hugging Face partnership.
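To make the function calling idea concrete: the developer describes available functions to the model in JSON Schema, and instead of replying in free text, the model can return the name of a function plus structured JSON arguments, which the developer's code then executes. The sketch below simulates that round trip locally, without calling the API. The function name `get_current_weather` and the `dispatch` helper are hypothetical illustrations, not part of OpenAI's SDK; in a real request, the schema would be passed via the `functions` parameter of a chat completions call.

```python
import json

# Hypothetical function schema, in the JSON Schema format that
# function calling uses: the model is told which functions exist
# and what arguments they accept.
weather_function = {
    "name": "get_current_weather",  # hypothetical tool name
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Simulated structured reply of the kind the model can return when it
# decides to call a function: a name plus JSON-encoded arguments.
simulated_reply = {
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"city": "Paris", "unit": "celsius"}',
    }
}

def dispatch(reply, registry):
    """Route a model function_call to local code with parsed arguments."""
    call = reply["function_call"]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return registry[call["name"]](**args)

# Local implementations keyed by function name (stubbed for the demo).
registry = {
    "get_current_weather": lambda city, unit="celsius": f"18°{unit[0].upper()} in {city}"
}

print(dispatch(simulated_reply, registry))  # → 18°C in Paris
```

The reliability gain is that the model emits machine-parseable arguments matching a declared schema, rather than prose that the developer has to scrape with regexes.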
The AI Breakdown helps you understand the most important news and discussions in AI.
Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
Join the community: bit.ly/aibreakdown
Learn more: http://breakdown.network/