LessWrong (Curated & Popular)

“I am worried about near-term non-LLM AI developments” by testingthewaters

Aug 1, 2025
The discussion highlights urgent risks from AI developments beyond large language models, arguing that existing safety research may miss critical threats. Innovations in online in-sequence learning could pave the way to human-like AGI, with breakthrough natural-language models potentially arriving within months. The episode stresses the importance of continuous learning in AI, distinguishes these emerging models from today's offline-trained ones, and advocates a strategic shift toward safer architectures that align with how humans learn.
AI Snips
INSIGHT

Limits of LLM Safety Research

  • Most LLM-focused AI safety research won't mitigate existential or civilization-scale risks.
  • In parallel, AI research on online learning could produce advanced AGI with continuous-learning capabilities sooner than expected.
INSIGHT

Offline Training vs Human Learning

  • Current AI models are trained offline, with separate pre-training and deployment phases and no continuous weight updates (a minimal contrast is sketched below).
  • Humans, by contrast, learn continuously and online, in sequence, drawing on both immediate and distant memory to predict what comes next.
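To make the contrast concrete, here is a minimal sketch (not from the episode) of the offline paradigm this snip describes: weights change during a pre-training phase, then freeze at deployment. The toy model, data, and hyperparameters are illustrative assumptions; the continuously updating counterpart appears after the next snip.

```python
# Minimal sketch of the offline paradigm: weights change during
# pre-training, then are frozen at deployment.
import torch
import torch.nn as nn

model = nn.Linear(64, 64)  # stand-in for a large pretrained model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Pre-training phase: weights are updated on a fixed corpus.
for _ in range(100):
    x, y = torch.randn(8, 64), torch.randn(8, 64)  # toy batch
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Deployment phase: weights are frozen; new inputs trigger no learning.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 64))  # inference only, no update
```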
INSIGHT

Potential of Online In-Sequence Learning

  • Online in-sequence learning uses smaller models that update their weights with each new input, improving memory and generalization (see the sketch after this list).
  • Built on brain-inspired and recurrent architectures, these models often outperform transformers on reasoning tasks.
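Below is a hedged sketch of the per-step update loop that online in-sequence learning implies: a small recurrent model predicts the next input and immediately trains on the prediction error before the next step arrives. The GRU architecture, input stream, and learning rate are assumptions for illustration; the models discussed in the episode may differ substantially.

```python
# Hedged sketch of an online, in-sequence update loop: the model
# predicts the next input and its weights change on every step.
import torch
import torch.nn as nn

rnn = nn.GRUCell(input_size=32, hidden_size=32)
readout = nn.Linear(32, 32)
optimizer = torch.optim.SGD(
    list(rnn.parameters()) + list(readout.parameters()), lr=1e-3
)

hidden = torch.zeros(1, 32)
stream = torch.randn(200, 1, 32)  # stand-in for an unbounded input stream

for t in range(stream.shape[0] - 1):
    hidden = rnn(stream[t], hidden)
    prediction = readout(hidden)      # predict the next input
    loss = nn.functional.mse_loss(prediction, stream[t + 1])
    optimizer.zero_grad()
    loss.backward()                   # learn from this step's error
    optimizer.step()                  # weights updated on every input
    hidden = hidden.detach()          # truncate gradient to one step
```

The design point is that there is no separate deployment phase: every input both produces a prediction and leaves a trace in the weights, which is what gives these models their claimed memory advantage.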