The hosts discuss the varying levels of excitement and interest in AI models like Sora: some people are fascinated by their potential, while others are indifferent or don't fully understand their impact. They note that it will take time for everyone to grasp AI's potential and its long-term effects.
The hosts discuss the exponential scaling potential of AI models like Sora and how it can lead to more advanced models in the future. They stress the importance of compute, noting that increased resources enable significant improvements in AI capabilities. They also acknowledge that while these advancements may not have an immediate impact on people's daily lives, they will shape the future of AI.
The hosts explore competition and market dynamics in AI. They highlight the advantage large companies like OpenAI hold in resources, data, and infrastructure, and discuss the implications for smaller startups, which need strategic decision-making, specialization, and unique applications to thrive in the market.
The hosts touch on the potential risks of AI development and the need for responsible, safe deployment. They emphasize the importance of regulation, safety measures, and alignment work to prevent negative impacts or misuse. While acknowledging the rapid progress in AI, they stress the continued need for careful evaluation and for progress within ethical boundaries.
The episode discusses the development of AI models and the potential need for regulation. The speaker argues that as models become more powerful and accessible, effective regulation is needed to prevent misuse. They contrast individuals building advanced models in their basements with large companies, and argue that regulation should be proportional to a model's impact and capabilities.
The podcast delves into the gradual increase in the IQ-like performance of AI models. The speaker suggests that the progression from GPT-2 to GPT-3 to GPT-4 shows steady gains in intelligence rather than drastic leaps: each model's ability to pass different tests indicates incremental improvement. They speculate on the possible IQ-point difference between GPT-3 and GPT-4, while acknowledging that the rate of improvement is uncertain and depends on resources and funding.
The podcast explores the diversity of perspectives on AI and its future impact. The speakers discuss how people's worldviews shape their opinions on AI safety and regulation, and highlight the importance of understanding others' assumptions about the trajectory of AI development. Drawing a parallel to the range of viewpoints during the COVID-19 pandemic, they emphasize the need for open conversations to bridge gaps in understanding and alignment.
Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and previously worked on deep learning for Google Arts & Culture.
We were talking about Sora on a daily basis, so I decided to record our conversation and then confront him about AI risk.
Patreon: https://www.patreon.com/theinsideview
Sora: https://openai.com/sora
Palette: https://palette.fm/
Emil: https://twitter.com/EmilWallner
OUTLINE
(00:00) this is not a podcast
(01:50) living in parallel universes
(04:27) palette.fm - colorizing b&w pictures
(06:35) Emil's first reaction to sora, latent diffusion, world models
(09:06) simulating minecraft, midjourney's 3d modeling goal
(11:04) generating camera angles, game engines, metadata, ground-truth
(13:44) doesn't remove all artifacts, surprising limitations: both smart and dumb
(15:42) did sora make emil depressed about his job
(18:44) OpenAI is starting to have a monopoly
(20:20) hardware costs, commoditized models, distribution
(23:34) challenges, applications building on features, distribution
(29:18) different reactions to sora, depressed builders, automation
(31:00) sora was 2y early, applications don't need object permanence
(33:38) Emil is pro open source and acceleration
(34:43) Emil is not scared of recursive self-improvement
(36:18) self-improvement already exists in current models
(38:02) emil is bearish on recursive self-improvement without diminishing returns, for now
(42:43) are models getting more and more general? is there any substantial multimodal transfer?
(44:37) should we start building guardrails before seeing substantial evidence of human-level reasoning?
(48:35) progressively releasing models, making them more aligned, AI helping with alignment research
(51:49) should AI be regulated at all? should self-improving AI be regulated?
(53:49) would a faster emil be able to take over the world?
(56:48) is competition a race to bottom or does it lead to better products?
(58:23) slow vs. fast takeoffs, measuring progress in iq points
(01:01:12) flipping the interview
(01:01:36) the "we're living in parallel universes" monologue
(01:07:14) priors are unscientific, looking at current problems vs. speculating
(01:09:18) AI risk & Covid, appropriate resources for risk management
(01:11:23) pushing technology forward accelerates races and increases risk
(01:15:50) sora was surprising, things that seem far are sometimes around the corner
(01:17:30) hard to tell what's not possible in 5 years that would be possible in 20 years
(01:18:06) evidence for a break on AI progress: sleeper agents, sora, bing
(01:21:58) multimodality transfer, leveraging video data, leveraging simulators, data quality
(01:25:14) is sora about length, consistency, or just "scale is all you need" for video?
(01:26:25) hijacking language models to say nice things is the new SEO
(01:27:01) what would michael do as CEO of OpenAI
(01:29:45) on the difficulty of budgeting between capabilities and alignment research
(01:31:11) ai race: the descriptive pessimistic view vs. the moral view, evidence of cooperation
(01:34:00) making progress on alignment without accelerating races, the foundational model business, competition
(01:37:30) what emil changed his mind about: AI could enable exploits that spread quickly, misuse
(01:40:59) michael's update as a friend
(01:41:51) emil's experience as a patron