How AI Is Built

#041 Context Engineering, How Knowledge Graphs Help LLMs Reason

Feb 6, 2025
Robert Caulk, who leads Emergent Methods and has over 1,000 academic citations, dives into the fascinating world of knowledge graphs and their integration with large language models (LLMs). He discusses how these graphs help AI systems connect complex data relationships, enhancing reasoning accuracy. The conversation also touches on the challenges of multilingual entity extraction and the need for context engineering to improve AI-generated content. Additionally, Caulk shares insights into upcoming features for real-time event tracking and the future of project management tools.
INSIGHT

Context Engineering Is Feature Engineering

  • Context engineering is feature engineering for LLMs and sets the ceiling for model performance.
  • Clean, structured, and concise inputs improve accuracy and reduce hallucinations.
ADVICE

Favor Simple Functions Over Full Agents

  • Limit agent autonomy and keep functions single-responsibility for maintainability and quality control.
  • Use small LLM-backed functions (e.g., question-tree generator) instead of fully autonomous chains.
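The question-tree generator mentioned above can be sketched as a small, single-responsibility function. This is a minimal illustration, not code from the episode: the `call_llm` helper is a hypothetical stand-in for whatever LLM client you use, stubbed here so the function's contract (topic in, bounded list of sub-questions out) stays easy to test.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # Assumed to return one sub-question per line.
    return "What is X?\nWhy does X matter?\nHow is X measured?"

def generate_question_tree(topic: str, max_questions: int = 3) -> list[str]:
    """Single-responsibility LLM-backed function: topic in, sub-questions out.

    No tool use, no autonomous loop -- just one call with a bounded,
    predictable output, which keeps it easy to test and maintain.
    """
    prompt = f"List {max_questions} research sub-questions for: {topic}"
    raw = call_llm(prompt)
    questions = [line.strip() for line in raw.splitlines() if line.strip()]
    return questions[:max_questions]
```

Because the function does one thing, swapping the stub for a real client changes nothing about its interface or its tests.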
ADVICE

Strip Noise Before The Final Call

  • Be concise and remove noise like HTML when preparing context for LLMs to lower hallucinations.
  • Save expensive tokens by cleaning inputs before the final high-cost LLM call.
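One way to strip HTML noise before the expensive call is with the standard library's `html.parser`; this is a minimal sketch of the idea (the `clean_context` name is illustrative, not from the episode), keeping visible text and dropping markup, scripts, and styles.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def clean_context(html: str) -> str:
    """Reduce raw HTML to plain text before the final LLM call."""
    extractor = TextExtractor()
    extractor.feed(html)
    return " ".join(extractor.parts)
```

Running the cleaned text through the model instead of raw HTML spends tokens on content rather than markup, which is the cost-saving point of the advice above.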