20VC: Is More Compute the Answer to Model Performance | Why OpenAI Abandons Products, The Biggest Opportunities They Have Not Taken & Analysing Their Race for AGI | What Companies, AI Labs and Startups Get Wrong About AI with Ethan Mollick
Ethan Mollick, an Associate Professor at the Wharton School and Co-Director of the Generative AI Lab, dives into the pressing issues within AI. He discusses how adding more compute might not yield better models, and why OpenAI may be mishandling consumer product development. Ethan critiques the startup landscape and the challenges of AI adoption, emphasizing the need for user-centered approaches. He also explores the importance of local connections in Venture Capital and the evolving role of AI in education as we approach AGI.
Duration: 01:08:50
INSIGHT
AI's Jagged Performance
Current AI models excel in some areas but lag in others, creating a jagged performance profile.
This jaggedness prevents AI from fully replacing human work, as it cannot consistently perform all tasks at a high level.
ANECDOTE
Steam Train Analogy
Ethan Mollick uses the steam train analogy to explain how new technologies spread.
Skilled artisans adapted the steam engine's power to various machines, driving the Industrial Revolution.
ADVICE
Focus on User Needs
Silicon Valley focuses on scaling for AGI and neglects user needs, leading to poorly designed products.
Companies should prioritize user-friendly interfaces and practical applications over scaling for theoretical future scenarios.
In *Co-Intelligence*, Ethan Mollick explores the profound impact of AI on business and education. He urges readers to engage with AI as co-workers, co-teachers, and coaches, using numerous real-time examples to illustrate its potential. Mollick argues that AI should augment human intelligence rather than replace it, and he provides practical advice on how to harness AI's power to create a better human future. The book addresses the transformative potential of AI, its ethical concerns, and the importance of mastering the skill of working with smart machines[1][2][4].
Ethan Mollick is the Co-Director of the Generative AI Lab at Wharton, which builds prototypes and conducts research to discover how AI can help humans thrive while mitigating risks. Ethan is also an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and also examines the effects of artificial intelligence on work and education. His papers have been published in top journals and his book on AI, *Co-Intelligence*, is a New York Times bestseller.
In Today's Episode with Ethan Mollick We Discuss:
1. Models: Is More Compute the Answer?
How has Ethan changed his mind on whether there is still a lot of room to run in adding more compute to improve model performance?
What will happen with models in the next 12 months that no one expects?
Why will open models immediately be used by bad actors, and what should happen as a result?
Data, algorithms, compute: which is the biggest bottleneck, and how will this change over time?
2. OpenAI: The Missed Opportunity, Product Roadmap and AGI:
Why does Ethan believe that OpenAI is completely out of touch with creating products that consumers want to use?
Which product did OpenAI shelve that will prove to be a massive mistake?
How does Ethan analyse OpenAI's pursuit of AGI?
Why does Ethan think the heuristic from Brad, COO @ OpenAI, that "startups should be threatened if they are not excited by a 100x improvement in models" is total BS?
3. VCs, Startups and AI Labs: What the World Does Not Understand:
What do big AI labs not understand about big companies?
What are the biggest mistakes companies are making when implementing AI?
Why are startups not being ambitious enough with AI today?
What are the single biggest ways consumers can and should be using AI today?