When the first video diffusion models emerged, they were little more than "moving pictures" - still frames extended a few seconds in either direction in time. There was a ton of excitement about OpenAI's Sora on its announcement through 2024, but so far only a lighter version of Sora has been widely released. Meanwhile, other strong video generation models like Genmo Mochi, Pika, MiniMax T2V, Tencent Hunyuan Video, and Kuaishou's Kling have emerged, but the reigning king this year seems to be Google's Veo 3, which for the first time adds native audio generation to its model capabilities, eliminating the need for a whole class of lip-syncing tooling and SFX editing.
The rise of Veo 3 unlocks a whole new category of AI video creators that many in our audience may not have been exposed to, but one that is undeniably effective and important, particularly in the "kids" and "brainrot" segments of global consumer internet platforms like TikTok, YouTube, and Instagram.
By far the best documentarians of these trends for laypeople are Olivia and Justine Moore, both partners at a16z, who not only collate the best examples from all over the web, but also create videos themselves to put theory into practice. We've been thinking of dabbling in AI brainrot on a secondary channel for Latent Space, so we wanted to get the braindump from the Moore twins on how to make a Latent Space Brainrot channel. Jump on in!
Chapters
- 00:00:00 Introductions & Guest Welcome
- 00:00:49 The Rise of Generative Media
- 00:02:24 AI Video Trends: Italian Brain Rot & Viral Characters
- 00:05:00 Following Trends & Creating AI Content
- 00:07:17 Hands-On with AI Video Creation
- 00:18:36 Monetization & Business of AI Content
- 00:23:34 Platforms, Models, and the Creator Stack
- 00:37:22 Native Content vs. Clipping & Going Viral
- 00:41:52 Prompt Theory & Meta-Trends in AI Creativity
- 00:47:42 Professional, Commercial, and Platform-Specific AI Video
- 00:48:57 Wrap-Up & Final Thoughts