
Digital Disruption with Geoff Nielson AGI Is Here: AI Legend Peter Norvig on Why It Doesn't Matter Anymore
Are we chasing the wrong goal with Artificial General Intelligence, and missing the breakthroughs that matter now?
On this episode of Digital Disruption, we’re joined by former research director at Google and AI legend, Peter Norvig.
Peter is an American computer scientist and a Distinguished Education Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He is also a researcher at Google, where he previously served as Director of Research and led the company's core search algorithms group. Before joining Google, Norvig headed NASA Ames Research Center's Computational Sciences Division, where he served as NASA's senior computer scientist and received the NASA Exceptional Achievement Award in 2001. He is best known as the co-author, alongside Stuart J. Russell, of Artificial Intelligence: A Modern Approach, the world's most widely used textbook in the field of artificial intelligence.
Peter sits down with Geoff to separate fact from fiction about where AI is really headed. He explains why the hype around Artificial General Intelligence (AGI) misses the point, how today's models are already "general," and what matters most now: making AI safer, more reliable, and human-centered. He discusses the rapid evolution of generative models, the risks of misinformation, AI safety, open-source regulation, and the balance between democratizing AI and containing powerful systems. The conversation explores AI's impact on jobs, education, cybersecurity, and global inequality, and how organizations can adapt, not by chasing hype, but by aligning AI with business and societal goals. If you want to understand where AI actually stands, beyond the headlines, this is the conversation you need to hear.
In this episode:
00:00 Intro
01:00 How AI evolved since Artificial Intelligence: A Modern Approach
03:00 Is AGI already here? Norvig’s take on general intelligence
06:00 The surprising progress in large language models
08:00 Evolution vs. revolution
10:00 Making AI safer and more reliable
12:00 Lessons from social media and unintended consequences
15:00 The real AI risks: misinformation and misuse
18:00 Inside Stanford’s Human-Centered AI Institute
20:00 Regulation, policy, and the role of government
22:00 Why AI may need an Underwriters Laboratories moment
24:00 Will there be one “winner” in the AI race?
26:00 The open-source dilemma: freedom vs. safety
28:00 Can AI improve cybersecurity more than it harms it?
30:00 “Teach Yourself Programming in 10 Years” in the AI age
33:00 The speed paradox: learning vs. automation
36:00 How AI might (finally) change productivity
38:00 Global economics, China, and leapfrog technologies
42:00 The job market: faster disruption and inequality
45:00 The social safety net and future of full-time work
48:00 Winners, losers, and redistributing value in the AI era
50:00 How CEOs should really approach AI strategy
52:00 Why hiring a “PhD in AI” isn’t the answer
54:00 The democratization of AI for small businesses
56:00 The future of IT and enterprise functions
57:00 Advice for staying relevant as a technologist
59:00 A realistic optimism for AI’s future
Connect with Peter:
LinkedIn: https://www.linkedin.com/in/pnorvig/
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
