

How DeepSeek Showed That Silicon Valley Is Washed
Feb 3, 2025
The podcast dives into how a new generative AI model threatens giants like OpenAI and Anthropic. It paints a bleak picture of innovation's decline in Silicon Valley, spotlighting the financial struggles of established tech firms. The rise of DeepSeek, with its efficient open-source model, challenges traditional companies and raises questions about the sustainability of AI investments. Additionally, the discussion touches on the intricacies of NFL contracts and SoftBank's shaky tech bets, revealing the broader implications for the future of technology.
LLM Unsustainability
- Large language models (LLMs) like those from OpenAI and Anthropic are unprofitable and unsustainable.
- The transformer architecture they rely on appears to have peaked: training data is running short and capability gains are plateauing.
The Myth of Bigger Models
- The prevailing belief was that larger models with more data would unlock new AI capabilities and justify high costs.
- Falling silicon prices were supposed to eventually bring those costs down, but that argument doesn't hold up.
DeepSeek's Disruption
- DeepSeek, a Chinese company, built a model comparable to OpenAI's at a far lower cost.
- This highlights the American tech industry's hubris and lack of genuine innovation.