This episode covers a research paper from the Allen Institute for AI and academic partners introducing a new autoregressive, multi-modal model. The model can understand and generate images, text, audio, and actions using a shared representation across all of these modalities, exploring the idea of true multi-modality: one system that can take in any kind of input and produce any kind of output. The episode also addresses the issue of test set contamination in language models and an evaluation of language model agents on autonomous replication and adaptation tasks.
Our 149th episode with a summary and discussion of last week's big AI news!
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts), plus there’s a video version on YouTube.
Read our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai
Timestamps + links:
- (00:00:00) Intro / Banter
- (00:08:13) Reflections on 2023
- Tools & Apps
- Applications & Business
- Research & Advancements
- Policy & Safety
- (01:23:40) Outro