

Treating Prompt Engineering More Like Code // Maxime Beauchemin // MLOps Podcast #167
Jul 25, 2023
Chapters
Introduction
00:00 • 5min
The Challenges of AI in Product Development
05:10 • 3min
How to Write SQL from a Text Question
08:16 • 4min
How to Iterate Over Different Edge Cases in a Test Suite
12:36 • 2min
Promptimize: A Testing Library for OpenAI
14:17 • 2min
How to Scale Hyperparameter Tuning in ML
15:54 • 2min
How to Use a Vector Database to Test Your Prompts
17:39 • 3min
Should We Open Source the Full Power of AI?
20:46 • 3min
The Future of ChatGPT
23:26 • 2min
OpenAI and the ChatGPT Revolution
25:24 • 2min
Prompt Engineering and Test Driven Development
27:36 • 2min
Python Prompts: How to Generate and Track Data in Reports
29:46 • 4min
How to Optimize Your Test Suite for Different Use Cases
33:40 • 3min
AI and the Future of Prompt Cases
36:49 • 3min
How to Monitor Speed and Cost in Production
39:32 • 3min
How to Compare Test Suite Runs
42:12 • 2min
How to Leverage AI in Your Product
44:08 • 2min
How to Measure the Value of AI in Your Product
46:20 • 3min
How LLMs Can Affect Your Products
49:40 • 2min
The Future of Prompts
52:09 • 5min
How to Fine-Tune for a Text-to-SQL Challenge
56:58 • 4min
How to Fine-Tune a Model for Each Customer at Preset
01:00:59 • 2min
The Importance of Fine-Tuning
01:02:32 • 3min
The Immutable Models of Machine Learning
01:05:02 • 3min
The Cognitive Weight of Model Training
01:08:11 • 2min
How to Write a Test Suite and Report on a Prompt
01:10:26 • 3min