The Sora architecture chunks images and videos into patches so the model learns to operate on those patches rather than on full frames. The patches act as atomic ingredients of the visual input and are mapped into latent space for processing, marking a shift from traditional image-level network architectures to diffusion transformers. Comparing models like Sora, DALL-E 3, and Stable Diffusion highlights the diminishing returns facing companies specialized in this area: now that models handle text rendering well, attention is shifting to finer details like drawing hands accurately. With Stable Diffusion 3 still in testing, competition in image generation appears to be intensifying as newer architectures like Sora and Gemini enter the scene.
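To make the patch idea concrete, here is a minimal sketch of turning a video latent into a sequence of spacetime patch tokens, in the spirit of what OpenAI's Sora technical report describes. This is not Sora's actual implementation (which is unpublished); the class name `SpacetimePatchify`, the tensor shapes, and parameters like `embed_dim` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpacetimePatchify(nn.Module):
    """Illustrative sketch: cut a video latent into spacetime patches
    and project each patch to a token. Shapes/names are hypothetical,
    not Sora's actual (unpublished) code."""

    def __init__(self, channels=4, patch_t=2, patch_hw=2, embed_dim=768):
        super().__init__()
        # A strided 3D convolution both cuts the latent into non-overlapping
        # (patch_t x patch_hw x patch_hw) blocks and projects each block
        # to an embed_dim-sized token in one step.
        self.proj = nn.Conv3d(
            channels, embed_dim,
            kernel_size=(patch_t, patch_hw, patch_hw),
            stride=(patch_t, patch_hw, patch_hw),
        )

    def forward(self, latent):
        # latent: (batch, channels, time, height, width)
        x = self.proj(latent)                # (B, D, T', H', W')
        # Flatten the spacetime grid: one token per patch, ready for
        # a diffusion transformer to process as a sequence.
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, D)

# Example: a 16-frame 32x32 latent becomes 2048 patch tokens.
tokens = SpacetimePatchify()(torch.randn(1, 4, 16, 32, 32))
print(tokens.shape)  # torch.Size([1, 2048, 768])
```

The design point this illustrates is the one the episode flags: because the transformer sees a flat sequence of patch tokens, the same model can ingest images and videos of varying resolution and duration, rather than being tied to a fixed image size.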
Our 157th episode with a summary and discussion of last week's big AI news!
Check out our sponsor, the SuperDataScience podcast. You can listen to SDS across all major podcasting platforms (e.g., Spotify, Apple Podcasts, Google Podcasts), plus there's a video version on YouTube.
Bonus plug: also check out this new book by Stanford AI expert, bestselling author, and Last Week in AI supporter Jerry Kaplan! Generative Artificial Intelligence: What Everyone Needs to Know
Check out our text newsletter and comment on the podcast at https://lastweekin.ai/
Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai
Timestamps + links:
- (00:00:00) Intro / Banter
- Tools & Apps
- Applications & Business
- Projects & Open Source
- Research & Advancements
- Policy & Safety
- Synthetic Media & Art
- Fun!