MLOps.community

Treating Prompt Engineering More Like Code // Maxime Beauchemin // MLOps Podcast #167



Python Prompts: How to Generate and Track Data in Reports

So you derive a class that is expected to take some input and generate a prompt. And then, for each one of these test prompts, there'll be an eval function that assesses the result — it might assess, like, "if I pass these numbers, do I get this number out?" By default, we're going to log all of the OpenAI-type parameters, like temperature, how long it took to answer, and how many tokens were used in the prompt. But you can augment that blob with whatever is relevant to your prompt. Maybe the most valuable thing, in these times where things are changing so fast, is the anchor that tests how well you're doing, right?
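The pattern Beauchemin describes — a subclassable prompt case paired with an eval function, logging call metadata by default — can be sketched roughly as follows. All names here are illustrative, not the API of any real tool, and the fake completion function stands in for an actual OpenAI call:

```python
import time
from typing import Any, Callable


class PromptCase:
    """Hypothetical prompt test case: builds a prompt from an input and
    evaluates the model's response against an expectation."""

    def __init__(self, user_input: str, eval_fn: Callable[[str], bool]):
        self.user_input = user_input
        self.eval_fn = eval_fn
        self.metadata: dict[str, Any] = {}

    def render(self) -> str:
        # Subclasses override this to customize prompt generation.
        return f"Answer concisely: {self.user_input}"

    def run(self, completion_fn: Callable[[str], str]) -> bool:
        prompt = self.render()
        start = time.monotonic()
        response = completion_fn(prompt)
        # Log call details alongside the result; these keys are crude
        # stand-ins for the OpenAI-style fields (temperature, token
        # counts, latency) that a real harness would record.
        self.metadata.update({
            "latency_s": time.monotonic() - start,
            "prompt_tokens": len(prompt.split()),
            "response_tokens": len(response.split()),
            "passed": self.eval_fn(response),
        })
        return self.metadata["passed"]


# Usage: a stubbed completion function replaces the model call.
case = PromptCase("what is 2 + 2?", eval_fn=lambda r: "4" in r)
print(case.run(lambda prompt: "4"))  # True if the eval passes
```

The eval functions act as the "anchor" mentioned above: when models or prompts change, rerunning the suite shows whether quality regressed.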

