Cognitive scientist and AI skeptic Gary Marcus joins the discussion to explore whether large language model scaling has hit a ceiling. He argues that scaling is now delivering diminishing returns and critiques the industry's reliance on simply adding more GPUs. The conversation touches on data privacy issues, ethical considerations in AI development, and the risks associated with both open-source and proprietary models. Marcus emphasizes the need for transparency and a more nuanced understanding of AI's future trajectory.
INSIGHT
Diminishing Returns from AI Scaling
Scaling laws once suggested that AI models would improve predictably with more data and compute, but those gains are now diminishing.
The industry is quietly acknowledging that making models bigger no longer yields the past exponential performance gains.
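The "scaling laws" in question are the empirical power-law fits of model loss against parameter count and training data (the Kaplan et al. and Chinchilla-style results). A minimal sketch of that form, using the usual notation, shows where the diminishing-returns point comes from:

\[
  L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad \alpha, \beta > 0
\]

where L is the model's loss, N its parameter count, D the number of training tokens, and E, A, B fitted constants. Because the marginal gain \(\partial L / \partial N = -\alpha A N^{-\alpha - 1}\) shrinks toward zero as N grows, each additional order of magnitude of parameters (or data) buys a smaller reduction in loss.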
INSIGHT
Scaling Up Models Yields Small Gains
Recent huge increases in model size do not correspond to proportionate improvements in performance.
The era of large jumps in AI model quality from just scaling up appears to be over; gains now are incremental and costly.
INSIGHT
Test-Time Compute Offers Narrow Gains
Adding more compute at test time (reasoning steps) helps improve performance only in narrow, closed domains.
This method doesn't yield broad reasoning improvements; the models often mimic human reasoning patterns rather than deeply understanding them.
Rebooting AI
Building Artificial Intelligence We Can Trust
Gary Marcus, Ernest Davis
Gary Marcus and Ernest Davis provide a lucid assessment of the current science of AI, explaining what today’s AI can and cannot do. They argue that current AI systems, based on deep learning, are narrow and brittle, and that achieving true artificial general intelligence requires moving beyond statistical analysis and large data sets. The authors suggest that by incorporating knowledge-driven approaches and common sense, we can build AI systems that are reliable and trustworthy in the settings where we use them, such as homes, cars, and medical offices.
Thinking, Fast and Slow
Daniel Kahneman
In this book, Daniel Kahneman takes readers on a tour of the mind, explaining how the two systems of thought shape our judgments and decisions. System 1 is fast, automatic, and emotional, while System 2 is slower, effortful, and logical. Kahneman discusses the impact of cognitive biases, the difficulties of predicting future happiness, and the effects of overconfidence on corporate strategies. He offers practical insights into how to guard against mental glitches and how to benefit from slow thinking in both personal and business life. The book also explores the distinction between the 'experiencing self' and the 'remembering self' and their roles in our perception of happiness.
Privacy is Power
Reclaiming Democracy in the Digital Age
Carissa Véliz
In 'Privacy is Power', Carissa Véliz exposes how the data economy erodes our privacy, highlighting the ways in which tech companies and governments harvest and exploit personal data. The book argues that this erosion of privacy undermines our autonomy and democracy, and it provides practical solutions for policymakers and ordinary citizens to take back control. Véliz emphasizes that privacy is both a personal and collective issue, and that protecting it is crucial for maintaining free choices and democratic societies.
Gary Marcus is a cognitive scientist, author, and longtime AI skeptic. Marcus joins Big Technology to discuss whether large-language-model scaling is running into a wall. Tune in to hear a frank debate on the limits of "just add GPUs" and what that means for the next wave of AI. We also cover data-privacy fallout from ad-driven assistants, open-source bio-risk fears, and the quest for interpretability. Hit play for a reality check on AI’s future, and the insight you need to follow where the industry heads next.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.