Bartosz Mikulski, an MLOps engineer with a rich background in data engineering, dives deep into the realm of AI data management. He highlights the crucial role of data testing in AI applications, especially with the rise of generative AI. Bartosz discusses the need for specialized datasets and the skills required for data engineers to transition into AI. He also addresses challenges like frequent data reprocessing and unstructured data handling, showcasing the evolving responsibilities in this fast-paced field.
Data engineers must adapt to evolving AI demands by mastering skills like data testing, reprocessing, and working with unstructured datasets.
Generative AI applications depend more on well-designed test and evaluation datasets than on the training datasets that dominate traditional data engineering.
Deep dives
Understanding Data Requirements for AI Applications
AI applications require specific types of data assets, primarily test and evaluation datasets. Unlike traditional data engineering, where training datasets are paramount, generative AI applications rely more on carefully constructed evaluation datasets to verify that each feature behaves as expected. As applications evolve, developers find that each stage of an AI workflow needs its own test datasets to troubleshoot the individual processes involved. Because the number of required datasets multiplies with each workflow stage, comprehensive data gathering and the ability to generate realistic testing scenarios become essential, and can significantly streamline the development and deployment of AI applications.
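As a rough illustration of what a per-stage evaluation dataset might look like (the function and case names below are hypothetical, not from the episode), each case pairs an input with an assertion about the expected output, and the evaluation reports the fraction of cases that pass:

```python
# Minimal sketch of an evaluation dataset for one stage of a generative AI
# workflow. run_summarizer and EVAL_CASES are illustrative stand-ins.

def run_summarizer(text: str) -> str:
    """Stand-in for the AI component under test: returns the first sentence."""
    return text.split(".")[0] + "."

# Each workflow stage would get its own list of cases like this one.
EVAL_CASES = [
    {"input": "Rust is fast. It is also safe.", "must_contain": "Rust"},
    {"input": "Ducks quack. Cats meow.", "must_contain": "Ducks"},
]

def evaluate(cases: list[dict]) -> float:
    """Return the fraction of cases whose output satisfies its assertion."""
    passed = sum(
        case["must_contain"] in run_summarizer(case["input"]) for case in cases
    )
    return passed / len(cases)

print(evaluate(EVAL_CASES))  # → 1.0
```

In practice the assertion would be richer (exact-match, semantic similarity, or an LLM-as-judge check), but the shape stays the same: a dataset of inputs plus expectations, separate from any training data.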
Evolving Roles and Responsibilities in AI Applications
The responsibilities of data engineering and MLOps teams are shifting as they adapt to the requirements of AI applications. While traditionally segmented roles existed, the crossover between data engineers and AI engineers is increasingly prevalent, as both teams must collaborate closely on data collection, model deployment, and performance testing. This transformation requires teams to not only manage structured data but also work with evolving unstructured datasets, which can involve a broader skill set for tasks like testing and extracting insights. Continuous adaptation and learning are necessary for team members to successfully navigate these changes and effectively support AI-driven projects.
Data Management Challenges and Innovations
The introduction of vector databases and embeddings presents unique challenges for data engineers accustomed to structured data. Engineers must now adopt strategies for data chunking and metadata management, optimizing how embeddings are generated and retrieved. Traditional extract-load-transform (ELT) workflows may fall short when large volumes of data need reprocessing, creating the need for more complex, parallel processing solutions. As teams evolve to meet these demands, refining workflow strategies and toolsets becomes essential for efficient data handling in AI contexts.
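To make the reprocessing point concrete, here is a minimal sketch of a fixed-size chunking strategy (the function name and parameter values are illustrative, not from the episode). Any change to the chunk size or overlap shifts every chunk boundary, which is why switching chunking strategies forces the whole corpus to be re-chunked, re-embedded, and re-indexed:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Changing chunk_size or overlap produces different chunk boundaries,
    so every stored embedding derived from the old chunks becomes stale.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

# 500 characters with a 200-char window and 50-char overlap → 4 chunks.
chunks = chunk_text("a" * 500, chunk_size=200, overlap=50)
print(len(chunks))  # → 4
```

Real pipelines typically chunk on semantic boundaries (sentences, headings) rather than raw characters, but the reprocessing cost behaves the same way, which is what motivates the parallel-processing solutions mentioned above.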
Addressing the Skills Gap in AI Data Engineering
A significant skills gap exists between launching AI applications based on simple implementations and developing robust, production-ready systems. The complexity of testing and validating AI systems necessitates that engineers not only build systems but also introduce extensive testing protocols. Understanding the unique workflow demands of AI applications is crucial, as many engineers may not realize the heavy burden of ensuring quality and performance once the system is operational. Bridging this gap requires targeted training and awareness of best practices in AI systems development to equip engineers with the essential skills for success.
Summary
In this episode of the Data Engineering Podcast, Bartosz Mikulski talks about preparing data for AI applications. Bartosz shares his journey from data engineering to MLOps and emphasizes the importance of data testing over software development in AI contexts. He discusses the types of data assets required for AI applications, including extensive test datasets, especially in generative AI, and explains how data requirements differ across AI application styles. The conversation also explores the skills data engineers need to transition into AI, such as familiarity with vector databases and new data modeling strategies, and highlights the challenges of evolving AI applications, including frequent reprocessing of data when chunking strategies or embedding models change.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Bartosz Mikulski about how to prepare data for use in AI applications
Interview
Introduction
How did you get involved in the area of data management?
Can you start by outlining some of the main categories of data assets that are needed for AI applications?
How does the nature of the application change those requirements? (e.g. RAG app vs. agent, etc.)
How do the different assets map to the stages of the application lifecycle?
What are some of the common roles and divisions of responsibility that you see in the construction and operation of a "typical" AI application?
For data engineers who are used to data warehousing/BI, what are the skills that map to AI apps?
What are some of the data modeling patterns that are needed to support AI apps?
chunking strategies
metadata management
What are the new categories of data that data engineers need to manage in the context of AI applications?
agent memory generation/evolution
conversation history management
data collection for fine tuning
What are some of the notable evolutions in the space of AI applications and their patterns that have happened in the past ~1-2 years that relate to the responsibilities of data engineers?
What are some of the skills gaps that teams should be aware of and identify training opportunities for?
What are the most interesting, innovative, or unexpected ways that you have seen data teams address the needs of AI applications?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI applications and their reliance on data?
What are some of the emerging trends that you are paying particular attention to?
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.