Venture capitalists weigh in on the early stages of the AI era, pondering investment strategies and the quest for sustainable business models. Discussion highlights the challenges posed by rising model training costs and the potential impacts of open-source AI. A thought-provoking analogy likens AI to 'unlimited interns,' prompting exploration of practical applications across industries. The cautious adoption trajectory of generative AI within enterprises raises questions about the future role of AI technologies. Will they become essential infrastructure or comprehensive platforms?
INSIGHT
VC Investment Strategy
VCs invest heavily upfront, expecting market expansion and winner-take-all outcomes.
They acknowledge high failure rates, viewing some investments as calculated gambles.
INSIGHT
AI's Marginal Costs
Generative AI has reintroduced marginal costs not seen since the mainframe era, unlike typical software.
Each incremental user or advancement adds significant cost, impacting pricing.
INSIGHT
AI Moats and Value
Current AI models lack strong moats beyond capital, favoring well-funded players.
Model quality per dollar is improving, but rising service prices raise questions about how much extra value is delivered.
How are the largest VCs viewing the early stages of the AI era from the perspective of investment, technology moats, economics, early adoption, and future use cases?
It’s not clear that there is a technology moat, but there may be a capital moat
Model training costs are expected to rise 5x to 10x - does that mean worse economics?
Lots of direct VC investment, plus second-order investments from vendors
LLM costs are reintroducing a marginal cost to software (not seen since the mainframe era) - see the back-of-envelope sketch at the end of these notes
Model quality vs. price is improving, but price of the services (e.g. ChatGPT-Pro) is increasing - how much extra value is being delivered?
How will open source impact AI?
“If anything in life is certain, semiconductors are cyclical, commodity tech goes to marginal cost, and every new tech produces a bubble.”
Today’s GenAI question - is it accurate and useful? How can we tell, and how can it improve (or does it need to)?
Start with a simple concept - AI gives us unlimited interns - how can you extrapolate that? How would this have been extrapolated for the original internet (create content, translate language, write code, etc.)?
Use cases are still not easy to see beyond chatbots (and variants) and coding assistants
Consulting revenue from GenAI is bigger than technology revenue - and most/many projects are still in trials.
Technology can take a long time to be adopted - Cloud still holds only ~30% of workloads after 15 years
66% of CEOs don’t expect their first GenAI app in production until sometime in 2025; 50% not until at least 2H 2025.
[Shadow AI] SaaS AI will accelerate adoption if it follows the Cloud pattern - external vendors are more motivated to attack business “change” than internal teams
[Build vs. Ecosystem] Do the LLM vendors become the application vendors? Where does the LLM start and stop (infra, platform, API, apps, etc.)?
[Learning from the customers] Do the LLM vendors use their knowledge advantage to build the apps?
GenAI Apps Categories - Make something better, Replace something, Just do the thing
“AI is just whatever is wrong/broken now” - how well does AI understand “broken”?
Will people be the biggest problem in AI progress?
[Decoupling] Look at the global markets for the Internet today - ecommerce/retail, food delivery, advertising, media, autonomous driving, etc.
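
Back-of-envelope sketch of the marginal-cost point above (not from the episode): a minimal Python calculation of per-user inference cost versus a subscription price. Every number here (requests per day, tokens per request, $/1M tokens, the $20/month tier) is an illustrative assumption, not a figure cited in the discussion.

```python
# Sketch: why LLM inference gives software a per-user marginal cost,
# unlike traditional near-zero-marginal-cost SaaS.
# All numbers are assumptions chosen for illustration only.

def monthly_inference_cost(requests_per_day: int,
                           tokens_per_request: int,
                           price_per_million_tokens: float) -> float:
    """Rough per-user inference cost per month (30 days)."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / 1_000_000 * price_per_million_tokens


# Hypothetical heavy user of an assistant-style product.
cost = monthly_inference_cost(
    requests_per_day=40,           # assumed usage
    tokens_per_request=3_000,      # assumed prompt + completion size
    price_per_million_tokens=10.0, # assumed blended $/1M tokens
)

subscription_price = 20.0  # assumed monthly subscription for a Pro-style tier

print(f"Estimated inference cost per user: ${cost:.2f}/month")
print(f"Gross margin at ${subscription_price:.0f}/month: "
      f"{(subscription_price - cost) / subscription_price:.0%}")
```

Under these assumed numbers the inference cost alone exceeds the subscription price, which is the pricing pressure the marginal-cost and "how much extra value" points are getting at; with lighter usage or cheaper tokens the margin flips positive.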