Jim Olsen, CTO of ModelOp, specializes in generative AI governance and regulation. He discusses the importance of monitoring and inventory for compliance in high-risk areas like healthcare, and emphasizes the technical controls needed for data governance along with continuous monitoring of AI models to detect issues. He addresses the balance between innovation and regulation, particularly in light of evolving EU rules, and highlights the necessity of building trust through effective governance solutions.
Governance of generative AI centers on how models are applied rather than on the models themselves; high-risk applications such as healthcare demand stringent oversight.
Organizations struggle to navigate the evolving regulatory landscape; the absence of a cohesive federal standard in the U.S. complicates their AI governance policies.
Effective governance requires robust technical controls and ongoing monitoring to prevent risks, such as sensitive data disclosures, especially in regulated sectors.
Deep dives
Understanding AI Governance
Governance for generative AI pertains more to the applications of models than to the individual models themselves. For instance, generating an image for personal use requires minimal governance, since the user has full control over the output, whereas uses with significant implications, such as medical diagnostics, demand stricter oversight because of their potential risks. Framing risk around the use case means governance must align with the specific contexts in which models are employed.
Challenges in Policy Creation
Organizations have difficulty determining which specific policies AI governance needs to include, especially given the evolving nature of regulations. The absence of a cohesive federal standard in the U.S., combined with varied state regulations, complicates compliance efforts. Many organizations struggle even to locate and assess the AI models embedded in their products, which poses risks, particularly in highly regulated fields like healthcare. A proper inventory of AI usage is essential to ensure compliance and mitigate risks associated with data handling.
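To make the inventory idea concrete, here is a minimal sketch of what an entry in an internal AI model registry might capture. This is an illustrative data structure, not ModelOp's schema; all field names and values are assumptions.

```python
# A minimal sketch of an AI model inventory record, assuming a simple
# in-house registry; field names are illustrative, not ModelOp's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryEntry:
    model_id: str                 # unique identifier within the organization
    name: str                     # human-readable model name
    use_case: str                 # the application context that drives the risk tier
    risk_tier: str                # e.g. "high" for clinical decision support
    owner: str                    # accountable team or individual
    data_sources: list[str] = field(default_factory=list)  # datasets the model touches
    regulations: list[str] = field(default_factory=list)   # e.g. ["HIPAA"]
    last_reviewed: date | None = None

# Example: cataloging a model embedded in a healthcare product
entry = ModelInventoryEntry(
    model_id="mdl-0042",
    name="discharge-summary-generator",
    use_case="summarizing clinical notes for discharge paperwork",
    risk_tier="high",
    owner="clinical-ml-team",
    data_sources=["ehr_notes"],
    regulations=["HIPAA"],
)
```

Even a flat record like this lets an organization answer the basic compliance questions: which models exist, what data they touch, and which regulations apply to each use case.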
Techniques for Secure AI Usage
To control potential risks associated with generative AI, organizations must establish robust technical controls and data governance practices. Techniques such as token masking and context filtering can prevent sensitive information from being transmitted during API requests. Additionally, ongoing monitoring processes should be in place to detect any inappropriate disclosures of sensitive information. This is especially pertinent in sectors like healthcare, where compliance with regulations like HIPAA is critical.
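As a rough illustration of token masking before an outbound API request, the sketch below uses simple regex-based detection. The patterns are illustrative only; a production system would typically rely on a dedicated PII-detection service rather than hand-rolled regular expressions.

```python
# A minimal sketch of token masking prior to an LLM API call, assuming
# regex-based PII detection; patterns here are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_tokens(text: str) -> tuple[str, bool]:
    """Replace detected PII with typed placeholders; report whether any was found."""
    found = False
    for label, pattern in PII_PATTERNS.items():
        text, count = pattern.subn(f"[{label}]", text)
        found = found or count > 0
    return text, found

prompt = "Summarize the visit for patient jane@example.com, SSN 123-45-6789."
masked, had_pii = mask_tokens(prompt)
if had_pii:
    # Feed the event into the monitoring pipeline before the request
    # leaves the organizational boundary
    print("PII detected and masked prior to API request")
print(masked)  # -> "Summarize the visit for patient [EMAIL], SSN [SSN]."
```

Logging every masking event, as in the `had_pii` branch above, is what gives the ongoing monitoring process a signal to detect inappropriate disclosure attempts over time.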
The Balance of Innovation and Risk
Companies are currently navigating the balance between embracing the innovative capabilities of generative AI and managing the associated risks. Many organizations opt for a cautious approach, often deploying internal solutions that keep a 'human in the loop' to supervise AI interactions. This lets them capture the advantages of AI while retaining a layer of protection against potential errors or biases in decision-making. Gradual integration gives organizations time to learn the landscape and understand the implications of AI for their business.
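A human-in-the-loop gate can be as simple as routing every model draft through a reviewer before release. The sketch below assumes a hypothetical internal tool; `generate_draft` and `request_review` are placeholders for your model call and review queue, not real APIs.

```python
# A minimal sketch of a human-in-the-loop approval gate, assuming an
# internal tool where a reviewer signs off on model output before release.
# Both helper functions are hypothetical placeholders.

def generate_draft(prompt: str) -> str:
    # Placeholder for the actual model call (e.g., an internal LLM endpoint)
    return f"DRAFT RESPONSE for: {prompt}"

def request_review(draft: str) -> bool:
    # Placeholder for routing the draft to a human reviewer; here we
    # simulate the approval decision from the command line.
    answer = input(f"Approve this draft? (y/n)\n{draft}\n> ")
    return answer.strip().lower() == "y"

def answer_with_oversight(prompt: str) -> str | None:
    draft = generate_draft(prompt)
    if request_review(draft):
        return draft   # approved output is released to the user
    return None        # rejected output never reaches the user
```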
The Need for Robust Governance Tools
As the generative AI landscape continues to evolve, the demand for effective governance solutions and tools is becoming increasingly apparent. Companies require reliable mechanisms to monitor AI model performance against established use cases and regulations. Enhancements in tooling for debugging and controlling AI flows are necessary to prevent operational mishaps. The future of AI governance lies in establishing frameworks that blend technical controls with business needs, ensuring models perform ethically and effectively.
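One concrete form of monitoring against an established use case is comparing recent model performance to a baseline captured at sign-off. The sketch below assumes a scalar quality metric logged per response; the two-standard-deviation threshold is an illustrative choice, not an industry standard.

```python
# A minimal sketch of baseline drift monitoring, assuming a scalar
# evaluation score is logged for each model response; the threshold
# is an illustrative assumption.
import statistics

def drift_alert(baseline: list[float], recent: list[float],
                threshold: float = 2.0) -> bool:
    """Flag drift when the recent mean strays more than `threshold`
    baseline standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return statistics.mean(recent) != mean
    z = abs(statistics.mean(recent) - mean) / stdev
    return z > threshold

baseline_scores = [0.91, 0.88, 0.93, 0.90, 0.89]  # captured at model sign-off
recent_scores = [0.71, 0.68, 0.74, 0.70, 0.69]    # scores from the latest window
if drift_alert(baseline_scores, recent_scores):
    print("Model performance has drifted from its approved baseline; escalate for review.")
```

The same pattern applies to other monitored signals, such as the rate of PII-masking events, so long as a baseline was recorded when the model was approved for its use case.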
Summary
In this episode of the AI Engineering Podcast Jim Olsen, CTO of ModelOp, talks about the governance of generative AI models and applications. Jim shares his extensive experience in software engineering and machine learning, highlighting the importance of governance in high-risk applications like healthcare. He explains that governance is more about the use cases of AI models rather than the models themselves, emphasizing the need for proper inventory and monitoring to ensure compliance and mitigate risks. The conversation covers challenges organizations face in implementing AI governance policies, the importance of technical controls for data governance, and the need for ongoing monitoring and baselines to detect issues like PII disclosure and model drift. Jim also discusses the balance between innovation and regulation, particularly with evolving regulations like those in the EU, and provides valuable perspectives on the current state of AI governance and the need for robust model lifecycle management.
Announcements
Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
Your host is Tobias Macey and today I'm interviewing Jim Olsen about governance of your generative AI models and applications
Interview
Introduction
How did you get involved in machine learning?
Can you describe what governance means in the context of generative AI models? (e.g. governing the models, their applications, their outputs, etc.)
Governance is typically a hybrid endeavor of technical and organizational policy creation and enforcement. From the organizational perspective, what are some of the difficulties that teams are facing in understanding what those policies need to encompass?
How much familiarity with the capabilities and limitations of the models is necessary to engage productively with policy debates?
The regulatory landscape around AI is still very nascent. Can you give an overview of the current state of legal burden related to AI?
What are some of the regulations that you consider necessary but as-of-yet absent?
Data governance as a practice typically relates to controls over who can access what information and how it can be used. The controls for those policies are generally available in the data warehouse, business intelligence, etc. What are the different dimensions of technical controls that are needed in the application of generative AI systems?
How much of the controls that are present for governance of analytical systems are applicable to the generative AI arena?
What are the elements of risk that change when considering internal vs. consumer facing applications of generative AI?
How do the modalities of the AI models impact the types of risk that are involved? (e.g. language vs. vision vs. audio)
What are some of the technical aspects of the AI tools ecosystem that are in greatest need of investment to ease the burden of risk and validation of model use?
What are the most interesting, innovative, or unexpected ways that you have seen AI governance implemented?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI governance?
What are the technical, social, and organizational trends of AI risk and governance that you are monitoring?
From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.