

Chetan Puttagunta and Modest Proposal - Capital, Compute & AI Scaling - [Invest Like the Best, EP.399]
Key Takeaways
- As of late 2024, the AI industry is shifting from scaling pre-training compute to scaling test-time (inference-time) compute
- Understanding the difference between pre-training and test-time compute: pre-training compute is the massive, up-front computation spent training a model before it is deployed, whereas test-time compute is spent at inference, when the model answers a query; the new paradigm improves capability by allocating more computation per query (e.g., longer reasoning) rather than by ever-larger pre-training runs (a minimal sketch of this trade-off follows the list below)
- Moving from pre-training to inference-time compute is a powerful paradigm shift for the AI industry
- 1. It better aligns revenue generation with expenditures; this is beneficial for the industry at large
- 2. Having to re-architect the computing network creates new opportunities and considerations related to power generation and grid design
- Relative to pre-training, test-time compute better aligns compute expenditures with model usage and revenue; this is better for the hyperscalers from an efficiency perspective
- The plateau in pre-training has enabled small teams to catch up to the state-of-the-art models; the proliferation of open source models, specifically what Meta has done with Llama, has been an extraordinary force for AI scaling
- If the plateau in pre-training continues, small teams will be able to “jump to the frontier” of model training for a specific AI use case; this reduces competition amongst the hyperscalers
- It is likely that two of the Mag7 companies, such as Google and Meta, will give away an AI search product similar to ChatGPT for free
- OpenAI is “very serious about achieving AGI”; that is the company’s mission, and everything else the company does is in service of that
- Stability at the model layer will enable optimization at the various layers above it; when the industry is in “land-grab” mode, there is no time to optimize!
- Over the long term, technology is deflationary because it is a matter of optimization
- When technology unlocks, distribution also unlocks; this means that startups can now acquire customers that were previously too expensive to get
- “I imagine that we’ll be pretty close to or at AGI in 2025.” – Chetan Puttagunta
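
For readers who want a concrete picture of the pre-training vs. test-time compute distinction, here is a minimal, hypothetical Python sketch of one common form of test-time scaling (best-of-N sampling against a scorer). The `generate` and `score` functions are placeholders, not any real model API, and the specifics are not from the episode; the point is only that extra capability is bought per query at inference rather than up front in a training run.

```python
# Hypothetical sketch: scaling capability with test-time compute via best-of-N
# sampling. `generate` and `score` are placeholders for a model's sampling call
# and a verifier/reward heuristic; they are not a real API.
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for one sampled model completion."""
    return f"candidate answer ({random.random():.3f})"

def score(prompt: str, answer: str) -> float:
    """Placeholder for a verifier scoring an answer's quality."""
    return random.random()

def answer_with_test_time_compute(prompt: str, n_samples: int) -> str:
    """Best-of-N: answer quality scales with inference-time compute
    (n_samples), not with a larger pre-training run."""
    candidates = [generate(prompt) for _ in range(n_samples)]
    return max(candidates, key=lambda a: score(prompt, a))

if __name__ == "__main__":
    # One query, N forward passes: the extra cost is paid per request,
    # which is what aligns spend with usage and revenue.
    print(answer_with_test_time_compute("Why did pre-training plateau?", n_samples=8))
```

In practice the per-query knob might be longer chain-of-thought reasoning rather than multiple samples; either way, compute is spent when a revenue-generating request arrives, which is the alignment the takeaways above describe.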
Read the full notes @ podcastnotes.org
My guests today are Chetan Puttagunta and Modest Proposal. Chetan is a General Partner at venture firm Benchmark, while Modest Proposal is an anonymous guest who manages a large pool of capital in the public markets. Both are good friends and frequent guests on the show, but this is the first time they have appeared together. And the timing couldn’t be better - we might be witnessing a pivotal shift in AI development as leading labs hit scaling limits and transition from pre-training to test-time compute. Together, we explore how this change could democratize AI development while reshaping the investment landscape across both public and private markets. Please enjoy this discussion with Chetan Puttagunta and Modest Proposal.
For the full show notes, transcript, and links to mentioned content, check out the episode page here.
-----
This episode is brought to you by Ramp. Ramp’s mission is to help companies manage their spend in a way that reduces expenses and frees up time for teams to work on more valuable projects. Ramp is the fastest growing FinTech company in history and it’s backed by more of my favorite past guests (at least 16 of them!) than probably any other company I’m aware of. It’s also notable that many best-in-class businesses use Ramp—companies like Airbnb, Anduril, and Shopify, as well as investors like Sequoia Capital and Vista Equity. They use Ramp to manage their spending, automate tedious financial processes, and reinvest saved dollars and hours into growth. At Colossus and Positive Sum, we use Ramp for exactly the same reason. Go to Ramp.com/invest to sign up for free and get a $250 welcome bonus.
–
This episode is brought to you by AlphaSense. AlphaSense has completely transformed the research process with cutting-edge AI technology and a vast collection of top-tier, reliable business content. Imagine completing your research five to ten times faster with search that delivers the most relevant results, helping you make high-conviction decisions with confidence. AlphaSense provides access to over 300 million premium documents, including company filings, earnings reports, press releases, and more from public and private companies. Invest Like the Best listeners can get a free trial now at Alpha-Sense.com/Invest and experience firsthand how AlphaSense and Tegus help you make smarter decisions faster.
-----
Invest Like the Best is a property of Colossus, LLC. For more episodes of Invest Like the Best, visit joincolossus.com/episodes.
Follow us on Twitter: @patrick_oshag | @JoinColossus
Editing and post-production work for this episode was provided by The Podcast Consultant (https://thepodcastconsultant.com).
Show Notes:
(00:00:00) Welcome to Invest Like the Best
(00:05:30) Introduction to LLM Scaling Challenges
(00:07:25) Synthetic Data and Test Time Compute
(00:08:53) Implications of Test Time Compute
(00:11:19) Public Tech Companies and AI Investments
(00:16:58) Small Teams and Open Source Models
(00:29:02) Strategic Positioning of Major AI Players
(00:35:49) AGI and Future Prospects
(00:46:50) AI Application Layer and Investment Opportunities
(00:54:18) The Paradigm Shift in AI Reasoning
(00:55:34) Investing in AI-Powered Solutions
(00:58:46) Economic Impacts of AI Advancements
(01:00:19) The Future of AI and Model Stability
(01:02:52) Private Market Valuations and Compute Costs
(01:05:05) Infrastructure and Utilization in AI
(01:12:50) The Role of Hyperscalers and GPUs
(01:18:02) The Evolution of AI Applications
(01:27:56) Philosophical Questions on AGI and ASI
(01:34:31) The Importance of Innovation Hubs