Lex Fridman Podcast

#490 – State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI

Feb 1, 2026
Sebastian Raschka, hands-on ML educator and author of practical LLM guides, and Nathan Lambert, post-training lead at AI2 and RLHF specialist, discuss China vs. US competition, which chatbots excel at coding and long context, open- vs. closed-model tradeoffs, architectural tweaks like MoE, where progress really comes from (systems, data, post-training), RL with verifiable rewards, scaling laws, tool use and agents, and timelines toward AGI.
INSIGHT

Ideas Move Faster Than Exclusive Tech

  • Ideas flow freely across labs; the real bottleneck is budget, hardware, and organizational execution.
  • Sebastian Raschka argues no single company will hold exclusive technological advantage in 2026.
INSIGHT

Culture Trumps Raw Model Hype

  • Product success depends on culture, operational discipline, and focused bets, not only model quality.
  • Nathan Lambert credits Anthropic's product-focused culture for recent momentum in coding tools.
ADVICE

Match Model Mode To Task Urgency

  • Use fast, lower-cost models for quick tasks; switch to "thinking" or pro-tier models for research or high-stakes work.
  • Choose a model to fit the latency, cost, and reliability the task requires rather than defaulting to the largest model.