

GEPA with Lakshya A. Agrawal - Weaviate Podcast #127!
Aug 13, 2025
Lakshya A. Agrawal, a Ph.D. student at U.C. Berkeley, discusses his work on GEPA, an optimizer that uses Large Language Models (LLMs) to evolve prompts. He elaborates on three key innovations: Pareto-Optimal Candidate Selection, Reflective Prompt Mutation, and System-Aware Merging. Lakshya explores how these techniques improve AI efficiency, the importance of incorporating domain knowledge, and the role of benchmarks like LangProBe. He also delves into the future of AI in scientific simulations and the advantages of combining language-based learning with traditional methods.
AI Snips
Leverage Textual Traces For Learning
- Language models expose rich natural language traces during rollouts that reveal reasoning, errors, and profiler info.
- GEPA mines those traces for learning signal, improving sample efficiency far beyond what scalar rewards alone provide (see the sketch after this snip).
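
A minimal sketch of what this reflective step could look like, in Python. The llm() helper is a hypothetical stand-in for any chat-completion client, and the prompt wording is illustrative, not GEPA's actual reflection prompt.

```python
def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call; plug in your own client."""
    raise NotImplementedError

def reflect_and_update(current_prompt: str, trace: str, score: float) -> str:
    """Turn one rollout's textual trace into a proposed prompt revision.

    A scalar reward says only how well the rollout did; the trace
    (reasoning steps, error messages, profiler output) says why, and
    that "why" is the extra learning signal mined here.
    """
    return llm(
        "You are improving an instruction for an LLM pipeline.\n\n"
        f"Current instruction:\n{current_prompt}\n\n"
        f"Execution trace (reasoning, errors, profiler info):\n{trace}\n\n"
        f"Score achieved: {score}\n\n"
        "Diagnose what went wrong or right, then write an improved "
        "instruction that addresses the failures you identified."
    )
```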
Motivating Project: Slow Hardware Rollouts
- Lakshya started from a code-generation project targeting a new hardware architecture that was slow to evaluate, which made rollouts expensive.
- That practical pain motivated GEPA: extract more signal from each rollout via LLM reflection.
Iterative Prompt Pool And Refinement
- Maintain a pool of candidate prompts and iteratively propose a new candidate, testing on minibatches before full evaluation.
- Use reflective mutation and system-aware merging to refine and combine promising instruction lineages; a loop sketch follows below.
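
A minimal sketch of that pool-and-refine loop under the same assumptions, reusing reflect_and_update() from the earlier sketch. evaluate() and run_and_trace() are hypothetical task-specific stand-ins, the acceptance rule is illustrative, and system-aware merging is omitted for brevity; this illustrates the idea, not GEPA's actual code.

```python
import random

def evaluate(prompt: str, examples: list) -> list[float]:
    """Hypothetical per-example scorer for your task; plug in your own."""
    raise NotImplementedError

def run_and_trace(prompt: str, examples: list) -> tuple[str, float]:
    """Hypothetical helper: run one rollout, return (textual trace, score)."""
    raise NotImplementedError

def pareto_frontier(pool: list[tuple[str, list[float]]]) -> list[str]:
    """Keep every candidate that is best on at least one training example.

    Selecting per-example winners instead of the single best average
    preserves diverse lineages, each of which solved something the
    others did not.
    """
    winners = set()
    for i in range(len(pool[0][1])):
        winners.add(max(pool, key=lambda cand: cand[1][i])[0])
    return sorted(winners)

def optimize(seed_prompt: str, trainset: list, budget: int,
             minibatch_size: int = 4) -> str:
    pool = [(seed_prompt, evaluate(seed_prompt, trainset))]
    for _ in range(budget):
        # Pick a parent from the Pareto frontier, not just the top scorer.
        parent = random.choice(pareto_frontier(pool))
        # Reflective mutation: mine a rollout's textual trace for a fix
        # (reflect_and_update is sketched after the first snip above).
        batch = random.sample(trainset, minibatch_size)
        trace, score = run_and_trace(parent, batch)
        child = reflect_and_update(parent, trace, score)
        # Cheap minibatch gate before paying for a full evaluation.
        if sum(evaluate(child, batch)) >= sum(evaluate(parent, batch)):
            pool.append((child, evaluate(child, trainset)))
    # Return the candidate with the best average over the full train set.
    return max(pool, key=lambda c: sum(c[1]) / len(c[1]))[0]
```

Keeping per-example score vectors rather than a single average is what makes the Pareto selection possible: a candidate that wins on even one example stays in the pool as a potential parent or merge partner.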