This chapter explores the release of Meta's Llama 3.1 model, detailing its 405 billion parameters and the complex training processes behind it. It discusses the competitive landscape of open-source AI, recent model releases, and the impact of these advancements on AI capabilities and monetization strategies. Additionally, the chapter highlights the challenges of predicting real-world performance and the implications of emergent capabilities for future AI development.
Our 176th episode with a summary and discussion of last week's big AI news!
NOTE: apologies for this episode coming out about a week late; things got in the way of editing it...
With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
If you would like to become a sponsor for the newsletter, podcast, or both, please fill out this form.
Email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai
- (00:00:00) Intro Song
- (00:00:34) Intro Banter
- Tools & Apps
- Projects & Open Source
- Applications & Business
- Research & Advancements
- Policy & Safety
- Synthetic Media & Art
- (01:23:03) Outro
- (01:23:58) AI Song