
#528: Python apps with LLM building blocks
Talk Python To Me
How to key and store LLM responses
They discuss using (model, settings, prompt) tuples as cache keys and storing multiple outputs per key for later analysis.
Segment begins at 26:55.

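As a rough illustration of the idea, here is a minimal sketch: serialize and hash the (model, settings, prompt) tuple into a cache key, then append every response under that key so repeated runs can be pulled back out and compared. The SQLite schema, the `ResponseStore` class, and all names below are assumptions for illustration, not code from the episode.

```python
# A minimal sketch of the caching idea discussed in this segment.
# The schema and names are illustrative, not from the episode; it
# assumes `settings` is a JSON-serializable dict.
import hashlib
import json
import sqlite3


def cache_key(model: str, settings: dict, prompt: str) -> str:
    # Serialize the (model, settings, prompt) tuple deterministically,
    # then hash it so the key is a fixed-length string.
    payload = json.dumps([model, settings, prompt], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class ResponseStore:
    """Keeps every response for a key so runs can be analyzed later."""

    def __init__(self, path: str = "llm_cache.sqlite") -> None:
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS responses ("
            "  key TEXT,"
            "  created TEXT DEFAULT CURRENT_TIMESTAMP,"
            "  output TEXT)"
        )

    def add(self, model: str, settings: dict, prompt: str, output: str) -> None:
        # Append a new output; existing outputs for the key are kept.
        self.db.execute(
            "INSERT INTO responses (key, output) VALUES (?, ?)",
            (cache_key(model, settings, prompt), output),
        )
        self.db.commit()

    def outputs(self, model: str, settings: dict, prompt: str) -> list[str]:
        # All stored outputs for this (model, settings, prompt) tuple,
        # oldest first -- useful for comparing variance across runs.
        rows = self.db.execute(
            "SELECT output FROM responses WHERE key = ? ORDER BY created",
            (cache_key(model, settings, prompt),),
        )
        return [output for (output,) in rows]
```

Usage would look something like `store.add("gpt-4o", {"temperature": 0.7}, prompt, response)` after each call, then `store.outputs(...)` with the same tuple to pull back every response for analysis; hashing the serialized tuple keeps the key compact while still distinguishing runs that differ only in settings.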