#179 Why ML Projects Fail, and How to Ensure Success with Eric Siegel, Founder of Machine Learning Week, Former Columbia Professor, and Bestselling Author
Eric Siegel, founder of Machine Learning Week and bestselling author, discusses why ML projects fail and how to ensure success, the challenges of machine learning deployment, collaboration between data and business teams, leveraging machine learning for route optimization at UPS, the importance of change management and skills transformation, and the hype and potential dangers of generative AI.
Podcast summary created with Snipd AI
Quick takeaways
Collaboration between data teams and business stakeholders is crucial for successful machine learning deployment and requires a standardized practice known as BizML.
Business stakeholders should focus on quantifying the impact of incremental improvements in machine learning performance using business metrics, bridging the gap between technical and business perspectives.
Deep dives
Importance of Collaboration and Scoping Machine Learning Use Cases with Business Stakeholders
Machine learning use cases need to be scoped in collaboration with business stakeholders. This collaborative practice, which Siegel calls BizML, addresses an organizational problem and requires standardization: a common business paradigm or playbook is crucial for successfully deploying machine learning projects. Business stakeholders need semi-technical knowledge of each project, including what is predicted, how well it is predicted, and what is done about it. By involving business stakeholders from the beginning and speaking a shared language, organizations increase their chances of deploying machine learning successfully and delivering value-driven projects.
Importance of Quantifying Business Impact
Quantifying business impact is crucial for machine learning projects. Technical metrics such as precision or recall are important, but they do not fully capture business impact. Business stakeholders should focus on metrics that directly relate to organizational success, such as profit, ROI, or customer satisfaction. There is often a disconnect between the two kinds of metrics, and bridging that gap requires quantifying the impact of incremental improvements in model performance, translating technical metrics into tangible business value and outcomes.
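As a rough illustration of this translation (the figures, function name, and scenario below are hypothetical, not taken from the episode), consider a targeted marketing campaign where a model's precision, the fraction of contacted customers who actually convert, can be converted directly into expected profit:

```python
# Hypothetical sketch: translating a technical metric (precision) into a
# business metric (campaign profit). All numbers are invented for illustration.

def campaign_profit(n_contacted: int, precision: float,
                    value_per_conversion: float, cost_per_contact: float) -> float:
    """Expected profit when `precision` is the fraction of contacted
    customers who convert."""
    revenue = n_contacted * precision * value_per_conversion
    cost = n_contacted * cost_per_contact
    return revenue - cost

# Untargeted mailing: 5% of contacts convert
baseline = campaign_profit(100_000, 0.05, 80.0, 2.0)
# Model-targeted mailing: precision improves to 8%
modeled = campaign_profit(100_000, 0.08, 80.0, 2.0)

print(baseline)            # 200000.0
print(modeled)             # 440000.0
print(modeled - baseline)  # 240000.0 -- the model's incremental business value
```

A three-point gain in precision looks small as a technical metric, but framed this way it becomes a concrete dollar figure that business stakeholders can weigh against the cost of building and deploying the model.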
The Need for Upskilling and Reskilling
Both data teams and business stakeholders need to upskill and reskill to collaborate effectively and leverage the potential of machine learning. Business stakeholders should develop a semi-technical understanding of machine learning concepts: what is predicted, how well it is predicted, and what is done about it. This common data language allows better collaboration and alignment between technical and business perspectives. Data scientists, in turn, should deepen their understanding of business metrics so that they prioritize business impact alongside technical performance.
Balancing Hype and Reality in Generative AI
Generative AI, such as large language models and image generators, has sparked significant hype. While the technology is impressive, there is a danger of over-hyping and creating unrealistic expectations, particularly around the idea of achieving Artificial General Intelligence (AGI). It is important to recognize the limitations and differences between generative AI and human capabilities. Rather than considering current developments as concrete steps towards AGI, it is crucial to maintain a grounded and realistic approach, focusing on the specific applications and potential benefits of generative AI within specific use cases.
We are in a generative AI hype cycle. Every executive looking at the potential of generative AI today is probably thinking about how to allocate their department's budget to building AI use cases. However, many of these use cases won't make it into production.
In a similar vein, the hype around machine learning in the early 2010s created great expectations for the technology, but much of the promised value did not pan out. Four years ago, VentureBeat reported that 87% of data science projects did not make it into production, and in many ways things haven't gotten much better since. If we don't learn why that is the case, generative AI could be destined for a similar fate.
Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI World, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times, and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.
In the episode, Adel and Eric explore the reasons why machine learning projects don't make it into production, the BizML framework and how to bring business stakeholders into the room when building machine learning use cases, the skill gap between business stakeholders and data practitioners, examples of organizations that have leveraged machine learning for operational improvements, what the previous machine learning hype cycle can teach us about generative AI, and much more.