The Prof G Pod with Scott Galloway

Regulating AI, Future-Proof Jobs, and Who’s Accountable When It Fails — ft. Greg Shove

Oct 6, 2025
Greg Shove, CEO of Section, examines the current state of AI regulation and its impact on the workforce. He discusses the need for safety protocols and the responsibility companies bear for deploying AI safely. Shove identifies which jobs are most at risk from AI and argues that skills like critical thinking and storytelling are essential for future-proofing careers. He also stresses the importance of human accountability in AI decision-making, making a compelling case for responsible AI adoption in high-stakes environments.
AI Snips
ADVICE

Prioritize Safety-First Regulation

  • Push for safety-focused AI regulation and require safety teams at frontier model developers.
  • Prefer state-level action now and support companies that prioritize safety with your wallet.
INSIGHT

Regulation Patchwork Is Growing

  • Regulatory progress is uneven: the EU and China have binding rules, while the U.S. lacks a federal AI law.
  • Voluntary safety-testing arrangements, such as agreements with NIST, exist but lack legal force and broad coverage.
ADVICE

Use Purchasing Power To Signal Safety

  • Vote with your wallet by choosing AI providers that invest in safety and avoiding those that don't.
  • Encourage your company to refuse to pay for AI offerings it considers unsafe, such as those from Meta or xAI.