MLOps.community

Treating Prompt Engineering More Like Code // Maxime Beauchemin // MLOps Podcast #167

Jul 25, 2023
Chapters
1. Introduction (00:00 • 5min)
2. The Challenges of AI in Product Development (05:10 • 3min)
3. How to Write SQL from a Text Question (08:16 • 4min)
4. How to Iterate Over Different Edge Cases in a Test Suite (12:36 • 2min)
5. Promptimize: A Testing Library for OpenAI (14:17 • 2min)
6. How to Scale Hyperparameter Tuning in ML (15:54 • 2min)
7. How to Use a Vector Database to Test Your Prompts (17:39 • 3min)
8. Should We Open Source the Full Power of AI? (20:46 • 3min)
9. The Future of ChatGPT (23:26 • 2min)
10. OpenAI and the ChatGPT Revolution (25:24 • 2min)
11. Prompt Engineering and Test-Driven Development (27:36 • 2min)
12. Python Prompts: How to Generate and Track Data in Reports (29:46 • 4min)
13. How to Optimize Your Test Suite for Different Use Cases (33:40 • 3min)
14. AI and the Future of Prompt Cases (36:49 • 3min)
15. How to Monitor Speed and Cost in Production (39:32 • 3min)
16. How to Tell Which Test Suite Run Is Better (42:12 • 2min)
17. How to Use AI to Leverage Your Product (44:08 • 2min)
18. How to Measure the Value of AI in Your Product (46:20 • 3min)
19. How LLMs Can Affect Your Products (49:40 • 2min)
20. The Future of Prompts (52:09 • 5min)
21. How to Fine-Tune for a Text-to-SQL Challenge (56:58 • 4min)
22. How to Fine-Tune a Model for Each Customer at Preset (01:00:59 • 2min)
23. The Importance of Fine-Tuning (01:02:32 • 3min)
24. The Immutable Models of Machine Learning (01:05:02 • 3min)
25. The Cognitive Weight of Model Training (01:08:11 • 2min)
26. How to Write a Test Suite and Report on a Prompt (01:10:26 • 3min)