LessWrong (Curated & Popular)

“Have LLMs Generated Novel Insights?” by abramdemski, Cole Wyeth

Mar 6, 2025
The discussion revolves around the ability of large language models to generate novel insights. Critics argue that LLMs have yet to prove their worth through significant achievements, such as proving new theorems or producing impactful writing. An intriguing anecdote offers a counterpoint: a chemist received a suggestion from an LLM that resolved a difficult synthesis problem with no prior online discussion. This juxtaposition raises the question of whether LLMs are genuinely insightful or merely adept at recombining existing information.

Podcast summary created with Snipd AI

Quick takeaways

  • Cole Wyeth claims that LLMs have not produced any significant theorems or enduring content despite their vast information storage.
  • There is potential for LLMs to generate novel insights by synthesizing existing information in new ways, as illustrated by the chemist anecdote, in which an LLM proposed a working fix for a synthesis problem.

Deep dives

The Limitations of LLMs in Generating Novel Insights

The discussion centers on the assertion that large language models (LLMs) have failed to produce meaningful novel insights in scientific contexts. Cole Wyeth argues that LLMs have contributed no significant theorems and written no enduring content, despite the vast store of information they draw on. An anecdote about a chemist illustrates a scenario where an LLM provided a solution to a problem with no prior online discussion, suggesting that LLMs can generate responses that at least appear innovative. Skepticism nonetheless remains about whether LLMs can conduct research that creates truly original knowledge, and many experts predict these models will not make substantial contributions to advancing science.
