

MLOps.community
Demetrios
Relaxed Conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, Vibes, etc)
Episodes

Feb 15, 2022 • 47min
Practitioners Guide to MLOps // Donna Schut and Christos Aniftos // Coffee Sessions #82
MLOps Coffee Sessions #82 with Donna Schut and Christos Aniftos, Practitioners Guide to MLOps.
// Abstract
The "Practitioners Guide to MLOps" introduced excellent frameworks for how to think about the field. Can we talk about how you've seen the advice in that guide applied to real-world systems? Is there additional advice you'd add to that paper based on what you've seen since its publication and with new tools being introduced?
Your article about selecting the right capabilities has a lot of great advice. It would be fun to walk through a hypothetical company case and talk about how to apply that advice in a real-world setting.
GCP has had a lot of new offerings lately, including Vertex AI. It would be great to talk through what's new and what's coming down the line. Our audience always loves hearing how tool providers like GCP think about the problems customers face and how tools are correspondingly developed.
// Bio
Donna Schut
Donna is a Solutions Manager at Google Cloud, responsible for designing, building, and bringing to market smart analytics and AI solutions globally. She is passionate about pushing the boundaries of our thinking with new technologies and creating solutions that have a positive impact. Previously, she was a Technical Account Manager, overseeing the delivery of large-scale ML projects, and part of the AI Practice, developing tools, processes, and solutions for successful ML adoption. She managed and co-authored Google Cloud’s AI Adoption Framework and Practitioners' Guide to MLOps.
Christos Aniftos
Christos is a machine learning engineer with a focus on the end-to-end ML ecosystem. On a typical day, Christos helps Google customers productionize their ML workloads using Google Cloud products and services with special attention on scalable and maintainable ML environments.
Christos made his ML debut in 2010 while working at DigitalMR, where he led a team of data scientists and developers to build a social media monitoring & analytics tool for the Market Research sector.
// Related links:
Select the Right MLOps Capabilities for Your ML Use Case
https://cloud.google.com/blog/products/ai-machine-learning/select-the-right-mlops-capabilities-for-your-ml-use-case
Practitioner's Guide to MLOps white paper
https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Donna on LinkedIn: https://www.linkedin.com/in/donna-schut/
Connect with Christos on LinkedIn: https://www.linkedin.com/in/aniftos/
Timestamps:
[00:00] Introduction to Donna Schut and Christos Aniftos
[05:52] Inspiration of Practitioner's Guide to MLOps paper
[06:57] Model for working with customers
[08:14] Where are we at with MLOps?
[10:20] Process of working with customers
[11:30] Overview of processes and capabilities outlined in Practitioner's Guide to MLOps paper
[16:16] Continuous Training maturity levels
[22:37] Context about the discovery process
[25:21] The mix of disciplines and security they tend to see
[26:12] Is there a level up in maturity?
[29:50] Success or failures that stand out
[38:00] War stories
[43:16] Internal study of qualities of the best ML engineers

Feb 14, 2022 • 49min
Investing in MLOps // Leigh Marie Braswell and Davis Treybig // MLOps Coffee Sessions #81
MLOps Coffee Sessions #81 with Davis Treybig and Leigh Marie Braswell, Machine Learning from the Viewpoint of Investors.
// Abstract
Machine learning is a rapidly evolving space that can be hard to keep track of. Every year, thousands of research papers are published in the space, and hundreds of new companies are built both in applied machine learning as well as in machine learning tooling.
In this podcast, we interview two investors who focus heavily on machine learning to get their take on the state of the machine learning industry today: Leigh Marie Braswell at Founders Fund and Davis Treybig at Innovation Endeavors. We discuss their perspectives on opportunities within MLOps and applied machine learning, common pitfalls and challenges seen in machine learning startups, and new projects they find exciting and interesting in the space.
// Bio
Davis Treybig
Davis (email: davis@innovationendeavors.com) is currently a principal on the investment team at Innovation Endeavors, an early-stage venture firm focused on highly technical companies. He primarily focuses on software infrastructure, especially data tooling and security. Prior to Innovation Endeavors, Davis was a product manager at Google, where he worked on the Pixel phone and the developer platform for the Google Assistant. Davis studied computer science and electrical engineering in college.
Leigh Marie Braswell
Leigh Marie (Twitter: @LM_Braswell) is an investor at Founders Fund. Before joining Founders Fund, she was an early engineer & the first product manager at Scale AI, where she originally built & later led product development for the LiDAR/3D annotation products, used by many autonomous vehicles, robots, and AR/VR companies as a core step in their machine learning lifecycles. She also has done software development at Blend, machine learning at Google, and quantitative trading at Jane Street.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Leigh on LinkedIn: https://www.linkedin.com/in/leigh-marie-braswell/
Connect with Davis on LinkedIn: https://www.linkedin.com/in/davistreybig/

Feb 8, 2022 • 42min
The Journey from Data Scientist to MLOps Engineer // Ale Solano // MLOps Coffee Sessions #80
MLOps Coffee Sessions #80 with Ale Solano, The Journey from Data Scientist to MLOps Engineer.
// Abstract
After years of failed POCs, all of a sudden one of our models is accepted and will be used in production. The next morning we are part of the main scrum stand-up meeting and a DevOps guy is assisting us. A strange feeling, unknown to us until then, starts growing in the AI team: we are useful!
Deploying models to production is challenging, but MLOps is more than that. MLOps is about making an AI team useful and iterative from the beginning. And it requires a role that takes care of the technical challenges this implies, given the experimental nature of the ML field, while also serving product and business needs. If your AI team does not include this role, maybe it's time for you to step up and do it yourself! Today, we chat with Ale about the transition from being a data scientist to a self-described MLOps engineer. And yes, you'll need to study computer science.
// Bio
Ale was born and raised in a small town near Malaga in southern Spain. He did his bachelor's degree in robotics because it sounded cool, and then got into machine learning because it was even cooler.
Ale worked at two companies as an ML developer. He is now on a temporary hiatus to study business and computer science and get a motivation boost.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Adam on LinkedIn: https://www.linkedin.com/in/aesroka/
Connect with Ale on LinkedIn: https://www.linkedin.com/in/alesolano/

Feb 4, 2022 • 52min
Platform Thinking: A Lemonade Case Study // Orr Shilon // MLOps Coffee Sessions #79
MLOps Coffee Sessions #79 with Orr Shilon, Platform Thinking: A Lemonade Case Study.
// Abstract
This episode is the epitome of why people listen to our podcast. It’s a complete discussion of the technical, organizational, and cultural challenges of building a high-velocity, machine learning platform that impacts core business outcomes.
Orr tells us about the focus on automation and platform thinking that’s uniquely allowed Lemonade’s engineers to make long-term investments that have paid off in terms of efficiency. He tells us the crazy story of how the entire data science team of 20+ people was supported by only 2 ML engineers at one point, demonstrating the leverage their technical strategy has given engineers.
// Bio
Orr is an ML Engineering Team Lead at Lemonade, currently working on an ML platform, empowering data scientists to manage the ML lifecycle from research to development and monitoring.
Previously, Orr worked at Twiggle on semantic search, at Varonis on data governance, and at Intel. He holds a B.Sc. in Computer Science and Psychology from Tel Aviv University.
Orr also enjoys trail running and sometimes races competitively.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Orr on LinkedIn: https://www.linkedin.com/in/orrshilon/

Jan 31, 2022 • 50min
Calibration for ML at Etsy - apply() special // Erica Greene and Seoyoon Park // MLOps Coffee Sessions #78
MLOps Coffee Sessions #78 with Erica Greene and Seoyoon Park, Calibration for ML at Etsy - apply() special.
// Abstract
This is a special conversation about machine learning calibration at Etsy. Demetrios sat down with Erica Greene and Seoyoon Park to hear how they implemented calibration in the Etsy machine learning workflow.
The conversation is a pre-chat with these two before their presentation at the apply() conference on February 10th.
Register here: applyconf.com
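For listeners new to the topic, the sketch below is a minimal, generic illustration of what calibration involves: checking whether a model's predicted probabilities match observed frequencies, then correcting them with a post-hoc calibrator. The data, model, and scikit-learn approach are stand-ins for illustration, not Etsy's actual stack.

# Minimal calibration sketch (hypothetical data and model, not Etsy's actual setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV, calibration_curve

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An uncalibrated classifier: its scores may rank well without being true probabilities.
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Reliability check: within each bin, does the predicted probability match the observed frequency?
prob_true, prob_pred = calibration_curve(y_test, clf.predict_proba(X_test)[:, 1], n_bins=10)
print("mean calibration gap before:", np.abs(prob_true - prob_pred).mean())

# Post-hoc calibration with isotonic regression, fitted via cross-validation.
calibrated = CalibratedClassifierCV(
    GradientBoostingClassifier(random_state=0), method="isotonic", cv=3
).fit(X_train, y_train)
prob_true_c, prob_pred_c = calibration_curve(
    y_test, calibrated.predict_proba(X_test)[:, 1], n_bins=10
)
print("mean calibration gap after:", np.abs(prob_true_c - prob_pred_c).mean())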
// Bio
Erica Greene
Erica is an engineering manager with a background in machine learning. She's passionate about developing programs and policies that support women and other underrepresented groups in technology.
Seoyoon Park
Seoyoon is a backend software engineer and aspiring software architect interested in producing scalable, performant, and fault-tolerant applications by keeping up to date with best practices and industry standards. He strives to better himself and his peers by advocating for frequent knowledge transfers and promoting a culture of continuous learning, and he is constantly looking for opportunities to grow as a developer and become a leader in the industry.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Erica on LinkedIn: https://www.linkedin.com/in/ericagreene/
Connect with Seoyoon on LinkedIn: https://www.linkedin.com/in/seoyoonpark/

Jan 28, 2022 • 57min
Data Mesh - The Data Quality Control Mechanism for MLOps? // Scott Hirleman // MLOps Coffee Sessions #77
MLOps Coffee Sessions #77 with Scott Hirleman, Data Mesh - The Data Quality Control Mechanism for MLOps?
// Abstract
Scott covers, at a high level, what a data mesh is for those not familiar. Data mesh is potentially a great win for ML/MLOps, as there is very clear guidance on creating useful, clean, well-documented/described, and interoperable data for "unexpected use". So instead of data spelunking being a harrowing task, it can be a very fruitful one. And that one data set that was so awesome?
Well, it wasn't a one-off; it's managed as a product with regular refreshes! And there is a LOT more ownership/responsibility on data producers to make sure the downstream doesn't break. It might sound like kumbaya for MLOps (or total BS?) regarding far cleaner data and fewer upstream breaks, so let's discuss the realities and limitations!
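To make the "managed as a product" idea concrete, here is a small, hypothetical sketch of the kind of contract a producing team might check before publishing a refresh, so downstream ML consumers don't break. The schema, column names, and rules are invented for illustration.

# Hypothetical data-product contract check: the producing team validates a refresh
# against a declared schema before publishing it, so downstream consumers don't break.
# (Column names, types, and rules are invented for illustration.)
import pandas as pd

CONTRACT = {
    "customer_id": "int64",
    "signup_date": "datetime64[ns]",
    "lifetime_value": "float64",
}

def validate_refresh(df: pd.DataFrame) -> list[str]:
    """Return a list of contract violations; an empty list means the refresh is publishable."""
    problems = []
    for column, dtype in CONTRACT.items():
        if column not in df.columns:
            problems.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            problems.append(f"{column}: expected {dtype}, got {df[column].dtype}")
    if "customer_id" in df.columns and df["customer_id"].duplicated().any():
        problems.append("customer_id must be unique")
    return problems

refresh = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "signup_date": pd.to_datetime(["2022-01-01", "2022-01-05", "2022-01-07"]),
    "lifetime_value": [120.0, 75.5, 300.2],
})
print(validate_refresh(refresh) or "refresh satisfies the contract")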
// Bio
A self-professed "chaotic (mostly) good character", Scott is focused on helping the data mesh community accelerate towards finding solutions for some of data management's hardest challenges. He founded the Data Mesh Learning community specifically to gather enough people to exchange ideas, much of it patterned after the MLOps community. He hosts the Data Mesh Radio podcast, where he dives deep into topics related to data mesh to provide the data community with useful perspectives and thoughts on data mesh.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Adam on LinkedIn: https://www.linkedin.com/in/aesroka/
Connect with Scott on LinkedIn: https://www.linkedin.com/in/scotthirleman/

Jan 25, 2022 • 51min
Build a Culture of ML Testing and Model Quality // Mohamed Elgendy // MLOps Coffee Sessions #76
MLOps Coffee Sessions #76 with Mohamed Elgendy, Build a Culture of ML Testing and Model Quality.
// Abstract
Machine learning engineers and data scientists spend most of their time testing and validating their models’ performance. But as machine learning products become more integral to our daily lives, the importance of rigorously testing model behavior will only increase.
Current ML evaluation techniques fall short in their attempts to describe the full picture of model performance. Evaluating ML models using only global metrics (like accuracy or F1 score) produces a low-resolution picture of a model's performance and fails to describe how the model performs across different types of cases, attributes, and scenarios.
It is rapidly becoming vital for ML teams to have a full understanding of when and how their models fail, and to track these cases across model versions to identify regressions. We've seen great results from teams implementing unit and functional testing techniques in their model testing. In this talk, we'll cover why systematic unit testing is important and how to effectively test ML system behavior.
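As a rough illustration of what behavioral tests add beyond a single global metric, here is a minimal sketch in the style of ordinary unit tests: one global check, one slice check, and one invariance check. The dataset, model, slice definition, and thresholds are invented for the example; this is not Kolena's product or API.

# Illustrative behavioral tests for a model, in the spirit of unit/functional ML testing
# (a generic sketch; the model, slice, and thresholds are toy stand-ins, not Kolena's API).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0, stratify=data.target
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_train, y_train)

def test_global_accuracy():
    # The usual global metric: necessary, but not sufficient on its own.
    assert accuracy_score(y_test, model.predict(X_test)) > 0.90

def test_slice_accuracy():
    # A behavioral check on a specific slice (here: samples with above-median mean radius),
    # so a regression on this subgroup cannot hide behind a good global number.
    mask = X_test[:, 0] > np.median(X_test[:, 0])
    assert accuracy_score(y_test[mask], model.predict(X_test[mask])) > 0.80

def test_invariance_to_tiny_noise():
    # An invariance check: predictions should not flip under negligible input perturbations.
    noisy = X_test + np.random.default_rng(0).normal(0, 1e-6, X_test.shape)
    assert (model.predict(X_test) == model.predict(noisy)).mean() > 0.99

if __name__ == "__main__":
    test_global_accuracy(); test_slice_accuracy(); test_invariance_to_tiny_noise()
    print("all behavioral checks passed")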
// Bio
Mohamed is the Co-founder & CEO of Kolena and the author of the book “Deep Learning for Vision Systems”. Previously, he built and managed AI/ML organizations at Amazon, Twilio, Rakuten, and Synapse. Mohamed regularly speaks at AI conferences like Amazon's DevCon, O'Reilly's AI conference, and Google's I/O.
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Adam on LinkedIn: https://www.linkedin.com/in/aesroka/
Connect with Mohamed on LinkedIn: https://www.linkedin.com/in/moelgendy/

Jan 21, 2022 • 57min
Towards Observability for ML Pipelines // Shreya Shankar // MLOps Coffee Sessions #75
MLOps Coffee Sessions #75 with Shreya Shankar, Towards Observability for ML Pipelines.
// Abstract
Achieving observability in ML pipelines is a mess right now. We are tracking thousands of means, percentiles, and KL divergences of features and outputs in a haphazard attempt to figure out when and how to retrain models.
In this session, we break down current unsuccessful approaches and discuss the path towards effectively maintaining ML models in production. Along the way, we introduce mltrace -- a preliminary open source project striving towards "bolt-on" observability in ML pipelines.
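To make the kind of statistic mentioned above concrete, here is a minimal sketch that compares a feature's training distribution against recent production values using a KL divergence over shared histogram bins. The data, bin count, and alert threshold are made up for illustration; this is not mltrace.

# Minimal feature-drift check: compare a feature's training distribution against recent
# production traffic with a KL divergence. (Illustrative only; data and threshold are made up.)
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training-time feature values
prod_feature = rng.normal(loc=0.8, scale=1.2, size=2_000)    # recent production values (drifted)

# Histogram both samples over shared bins, then smooth to avoid zero-probability bins.
bins = np.histogram_bin_edges(np.concatenate([train_feature, prod_feature]), bins=30)
p, _ = np.histogram(train_feature, bins=bins, density=True)
q, _ = np.histogram(prod_feature, bins=bins, density=True)
p, q = p + 1e-9, q + 1e-9
p, q = p / p.sum(), q / q.sum()

kl = entropy(p, q)  # KL(train || production)
print(f"KL divergence: {kl:.3f}")
if kl > 0.1:  # arbitrary alert threshold for the sketch
    print("feature drift detected; consider investigating or retraining")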
// Bio
Shreya Shankar is a computer scientist living in the Bay Area. She's interested in building systems to operationalize machine learning workflows. Shreya's research focus is on end-to-end observability for ML systems, particularly in the context of heterogeneous stacks of tools.
Currently, Shreya is doing her Ph.D. in the RISE lab at UC Berkeley. Previously, she was the first ML engineer at Viaduct, did research at Google Brain, and completed her BS and MS in computer science at Stanford University.
// Related Links
Shreya Shankar's blogposts: https://www.shreya-shankar.com/
Shreya Shankar's Podcasts: https://www.listennotes.com/top-episodes/shreya-shankar/
The deployment phase of machine learning by Benedict Evans: https://www.ben-evans.com/benedictevans/2019/10/4/machine-learning-deployment
Shreya Shankar's mltrace blog post: https://www.shreya-shankar.com/introducing-mltrace/
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Shreya on LinkedIn: https://www.linkedin.com/in/shrshnk
Timestamps:
[00:00] Introduction to Shreya Shankar
[01:12] Shreya's background
[03:22] Contrast in scale influence
[05:28] Embedding ML and building machine learning infused products
[07:26] Management structure and professional incentive
[08:25] Organizational side of MLOps retros
[10:15] Tooling implementations
[12:00] Structured rational investment hardships
[13:17] Working at a start-up
[14:02] Academic work and entrepreneurial ambitions
[16:00] ML Monitoring Observability interest
[17:14] Where to get started
[20:47] Realization while at Viaduct
[23:30] Preventing alert fatigue
[27:04] Tooling bridging the gap
[30:40] Juncture at overall MLOps ecosystem
[33:58] The deployment phase of machine learning - it's the new SQL by Benedict Evans
[35:30] Model monitoring
[36:16] mltrace
[38:28] Introducing mltrace blog post series
[41:25] Tips to our content creators/writers
[43:47] Monitoring through the lens of the database
[47:37] Advice about picking up ML engineering and ML systems development in 2022
[49:36] Databases lower down the stack
[50:51] Most excited about 2022
[52:13] What MLOps space/ecosystem should change?
[53:21] Funding has changed the incentives around innovation
[54:52] Competition in million-dollar rounds
[55:25] Starting a company
[56:30] Wrap up

Jan 19, 2022 • 51min
Scaling Biotech // Jesse Johnson // MLOps Coffee Sessions #74
MLOps Coffee Sessions #74 with Jesse Johnson, Scaling Biotech.
// Abstract
Scaling a biotech research platform requires managing organizational complexity - teams, functions, projects - rather than just the traditional volume, velocity, and variety. By examining the processes and experiments that drive the platform, you can focus your work where it matters most, finding the ideal balance for each type of experiment along with a number of common trade-offs.
// Bio
Jesse Johnson is head of Data Science and Data Engineering at Dewpoint Therapeutics, an R&D-stage biotech startup. His interest in exploring complex systems, understanding what makes them tick, and then using this understanding to improve and scale them led him from academic mathematics into software engineering (Google, Verily Life Sciences) and then into biotech (Sanofi, Cellarity, Dewpoint). His goal is to identify ways to scale biotech research through better software and organizational design.
// Related Links
Jesse's blog posts: scalingbiotech.com
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Jesse on LinkedIn: https://www.linkedin.com/in/jesse-johnson-51619a7/
Timestamps:
[00:00] Introduction to Jesse Johnson
[05:10] Jesse's background
[05:52] Biotech environments
[06:31] Jesse's background in Biotech companies
[09:21] Jesse's journey from academic to software engineering
[12:20] Transition from primary output insights/research into writing code
[14:54] Actual hands-on use case in practice
[19:19] Jesse's career trajectory
[23:57] Where we're at with state-of-the-art data engineering and its outstanding challenges
[26:50] Dewpoint's data and machine learning challenges and tooling
[29:04] Dewpoint's team structure
[30:20] Jesse being the VP of Data Science and Data Engineering
[33:24] New biotech data makes it hard to design a data platform
[35:35] Changes in how biotech data is viewed
[35:54] Experiment data output
[40:19] Solving challenges in structuring real-world context into interpretable data fields
[44:16] Maturity between the current data engineering and MLOps tooling space
[47:31] Achieving a blogpost mission in 2022
[49:50] Wrap up

Jan 7, 2022 • 53min
On Structuring an ML Platform 1 Pizza Team // Breno Costa & Matheus Frata // MLOps Coffee Sessions #73
MLOps Coffee Sessions #73 with Breno Costa and Matheus Frata, On Structuring an ML Platform 1 Pizza Team.
// Abstract
Breno and Matheus were part of an organizational change at Neoway in recent years, with the creation of cross-functional and platform teams to improve the value streams they generate. They share their experience creating a machine learning platform team: the challenges they faced along the way, how they approached it using product thinking, and the results achieved so far.
// Bio
Matheus Frata
Matheus is an electronics engineer who got into data science by accident! During his graduation, Matheus joined Neoway as a Data Scientist, but during that time he saw a lot of problems that were related to engineering. This was Matheus' beginning with MLOps. Today, Matheus works as a Machine Learning Engineer, helping their data scientists to FLY!
Breno Costa
Breno uses his mixed background in Computer Science and Mathematical Modeling to design and develop ML-based software products. A brief period as an entrepreneur gave him a different perspective on how to approach problems and generate more value. He has worked at Neoway for three years and currently works as a machine learning engineer on the Platform team.
// Related links
https://mlops.community/building-neoways-ml-platform-with-a-team-first-approach-and-product-thinking/
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletter and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/
Connect with Breno on LinkedIn: https://www.linkedin.com/in/breno-c-costa/
Connect with Matheus on LinkedIn: https://www.linkedin.com/in/matheus-frata/
Timestamps:
[00:00] Introduction to Breno Costa & Matheus Frata
[02:08] Breno's background in Neoway
[03:23] What does Neoway do and Matheus' background in Neoway
[05:43] Organizational structure of Neoway
[07:31] Concept of redesign
[10:47] Getting the structure right as a priority
[15:26] Designing the teams
[20:28] Three different ways of setting up the cells interaction
[23:58] Platform differences
[25:33] Technical components before redesigning and organizational overhauling
[31:50] Supporting platform teams
[33:23] Settling on a tech stack and managing technical needs
[42:10] Building internal tools
[50:10] Wrap up