The challenge is, there's a lot of private context here, and clearly that doesn't fit in a prompt. So the other approach is, can we just fine-tune a model? I create a corpus of 10,000 tables, throw my dbt pipelines into it, and then fine-tune a model that has the full context of my entire data catalog and more. But if you train a model with a bunch of information... so at OpenAI, they have this sequence of training, and the SQL teaching is one portion of it.
MLOps Coffee Sessions #167 with Maxime Beauchemin, Treating Prompt Engineering More Like Code.
// Abstract
Promptimize is a tool designed to systematically evaluate the effectiveness of prompts. Discover the advantages of open-sourcing the tool and its relevance, drawing parallels with test suites in software engineering. Uncover the growing interest in this domain and the need for transparent interactions with language models. Delve into the world of prompt optimization, deterministic evaluation, and the unique challenges of AI prompt engineering.
// Bio
Maxime Beauchemin is the founder and CEO of Preset, a Series B startup supporting and commercializing the Apache Superset project. Max was the original creator of Apache Airflow and Apache Superset when he was at Airbnb. Max has over a decade of experience in data engineering at companies like Lyft, Airbnb, Facebook, and Ubisoft.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Max's first MLOps Podcast episode: https://go.mlops.community/KBnOgN
Test-Driven Prompt Engineering for LLMs with Promptimize blog: https://maximebeauchemin.medium.com/mastering-ai-powered-product-development-introducing-promptimize-for-test-driven-prompt-bffbbca91535
Test-Driven Prompt Engineering for LLMs with Promptimize podcast: https://talkpython.fm/episodes/show/417/test-driven-prompt-engineering-for-llms-with-promptimize
Taming AI Product Development Through Test-driven Prompt Engineering // Maxime Beauchemin // LLMs in Production Conference lightning talk: https://home.mlops.community/home/videos/taming-ai-product-development-through-test-driven-prompt-engineering
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Max on LinkedIn: https://www.linkedin.com/in/maximebeauchemin/
Timestamps:
[00:00] Max introducing the Apache Superset project at Preset
[01:04] Max's preferred coffee
[01:16] Airflow creator
[01:45] Takeaways
[03:53] Please like, share, and subscribe to our MLOps channels!
[04:31] Check Max's first MLOps Podcast episode
[05:20] Promptimize
[06:10] Interaction with API
[08:27] Deterministic evaluation of SQL queries and AI
[12:40] Figuring out the right edge cases
[14:17] Interaction with a Vector Database
[15:55] Promptimize Test Suite
[18:48] Promptimize vision
[20:47] The open-source blood
[23:04] Impact of open source
[23:18] Dangers of open source
[25:25] AI-Language Models Revolution
[27:36] Test-driven design
[29:46] Prompt tracking
[33:41] Building Test Suites as Assets
[36:49] Adding new prompt cases for new capabilities
[39:32] Monitoring speed and cost
[44:07] Creating own benchmarks
[46:19] AI feature adding more value to the end users
[49:39] Perceived value of the feature
[50:53] LLMs costs
[52:15] Specialized models versus generalized models
[56:58] Fine-tuning LLMs use cases
[1:02:30] Classic Engineer's Dilemma
[1:03:46] Build exciting tech that's available
[1:05:02] Catastrophic forgetting
[1:10:28] Prompt-driven development
[1:13:23] Wrap up