"You can effectively know these things in a much clearer way than if you're just evaluating by looking at the answers, and that's really hard to scale. It makes complete sense to say, all right, this prompt is 75% useful, but then when you point it at a different model, it holds up much better or much worse. It brings the mindset of hyperparameter tuning in ML, right? You want to train a whole multi-dimensional matrix of models and then compare how they're performing against one another, running all your test cases across them."
MLOps Coffee Sessions #167 with Maxime Beauchemin, Treating Prompt Engineering More Like Code.
// Abstract
Promptimize is a tool for evaluating the effectiveness of prompts scientifically. Max explains why he open-sourced it and draws a parallel with test suites in software engineering: prompts, like code, deserve transparent, repeatable evaluation. The conversation covers the growing interest in this space, prompt optimization, deterministic evaluation, and the unique challenges of AI prompt engineering.
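To make the test-suite analogy concrete, here is a minimal Python sketch of test-driven prompt evaluation in the spirit of Promptimize: each case pairs a prompt with a deterministic check on the model's response, and a suite reports the fraction of cases that pass. The names (`PromptCase`, `run_suite`, `fake_llm`) are illustrative only, not Promptimize's actual API, and the stub model stands in for a real LLM call.

```python
# Illustrative sketch of test-driven prompt evaluation (not Promptimize's API).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PromptCase:
    prompt: str                      # input sent to the model
    evaluate: Callable[[str], bool]  # deterministic check on the response

def run_suite(cases: List[PromptCase], llm: Callable[[str], str]) -> float:
    """Run every case against `llm` and return the pass rate (0.0 to 1.0)."""
    results = [case.evaluate(llm(case.prompt)) for case in cases]
    return sum(results) / len(results)

# Stand-in for a real model call (e.g. a text-to-SQL API request).
def fake_llm(prompt: str) -> str:
    if "how many" in prompt:
        return "SELECT count(*) FROM users"
    return "hello"

cases = [
    PromptCase("how many users are there?",
               lambda out: out.strip().upper().startswith("SELECT")),
    PromptCase("greet the user",
               lambda out: "hello" in out.lower()),
]

print(f"pass rate: {run_suite(cases, fake_llm):.0%}")  # pass rate: 100%
```

Because the evaluation functions are deterministic, the same suite can be rerun against different models or prompt templates, which is exactly the multi-dimensional comparison Max describes.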
// Bio
Maxime Beauchemin is the founder and CEO of Preset, a Series B startup supporting and commercializing the Apache Superset project. Max was the original creator of Apache Airflow and Apache Superset during his time at Airbnb. He has over a decade of experience in data engineering at companies like Lyft, Airbnb, Facebook, and Ubisoft.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Max's first MLOps Podcast episode: https://go.mlops.community/KBnOgN
Test-Driven Prompt Engineering for LLMs with Promptimize blog: https://maximebeauchemin.medium.com/mastering-ai-powered-product-development-introducing-promptimize-for-test-driven-prompt-bffbbca91535
Test-Driven Prompt Engineering for LLMs with Promptimize podcast: https://talkpython.fm/episodes/show/417/test-driven-prompt-engineering-for-llms-with-promptimize
Taming AI Product Development Through Test-driven Prompt Engineering // Maxime Beauchemin // LLMs in Production Conference lightning talk: https://home.mlops.community/home/videos/taming-ai-product-development-through-test-driven-prompt-engineering
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Max on LinkedIn: https://www.linkedin.com/in/maximebeauchemin/
Timestamps:
[00:00] Max introducing the Apache Superset project at Preset
[01:04] Max's preferred coffee
[01:16] Airflow creator
[01:45] Takeaways
[03:53] Please like, share, and subscribe to our MLOps channels!
[04:31] Check Max's first MLOps Podcast episode
[05:20] Promptimize
[06:10] Interaction with API
[08:27] Deterministic evaluation of SQL queries and AI
[12:40] Figuring out the right edge cases
[14:17] Interaction with Vector Database
[15:55] Promptimize Test Suite
[18:48] Promptimize vision
[20:47] The open-source blood
[23:04] Impact of open source
[23:18] Dangers of open source
[25:25] AI-Language Models Revolution
[27:36] Test-driven design
[29:46] Prompt tracking
[33:41] Building Test Suites as Assets
[36:49] Adding new prompt cases for new capabilities
[39:32] Monitoring speed and cost
[44:07] Creating own benchmarks
[46:19] AI feature adding more value to the end users
[49:39] Perceived value of the feature
[50:53] LLMs costs
[52:15] Specialized model versus Generalized model
[56:58] Fine-tuning LLMs use cases
[1:02:30] Classic Engineer's Dilemma
[1:03:46] Build exciting tech that's available
[1:05:02] Catastrophic forgetting
[1:10:28] Prompt-driven development
[1:13:23] Wrap up