The path towards trustworthy AI (Practical AI #293)
Oct 29, 2024
Elham Tabassi, Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), shares insights on the journey to trustworthy AI. She and host Chris delve into NIST’s AI Risk Management Framework and its connection to a recent White House executive order aimed at enhancing AI safety. Tabassi emphasizes the crucial roles of explainability, reliability, and governance in advancing AI, especially in sensitive fields like healthcare. The conversation highlights the urgency of strong standards to ensure that the rapid advancements in AI remain safe and reliable.
NIST is pivotal in creating trustworthy AI standards, emphasizing stakeholder engagement to address security, privacy, and ethical considerations.
The AI Risk Management Framework (AI RMF) outlines essential characteristics of trustworthy AI, including reliability, accountability, and safety, by merging insights from various disciplines.
Deep dives
Leveraging Postgres for AI Development
Postgres, a powerful open-source database, is being used by Timescale to enhance the development of AI applications. Developers can capitalize on their existing knowledge of Postgres to create advanced applications, including time series analytics and AI-related techniques like retrieval-augmented generation (RAG) and search agents. Timescale's pgai project gives developers a path into AI engineering without requiring them to learn new technologies. With open-source tools available for local setups, developers can easily experiment and build projects using the familiar SQL query language.
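To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieval step. In a Postgres/pgvector setup the embeddings would live in a table and the similarity search would be a SQL query; this toy version keeps everything in memory, and the `embed()` function is a hypothetical stand-in for a real embedding model, not part of pgai.

```python
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": a character-frequency vector over a-z.
    # A real system would call an embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: the same measure pgvector exposes in SQL.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k;
    # the retrieved text would then be fed to an LLM as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "time series analytics with postgres",
    "baking sourdough bread at home",
]
print(retrieve("postgres time series", docs))
```

The design point the episode segment makes is that, with pgvector and pgai, this ranking step can stay inside Postgres as an ordinary SQL `ORDER BY` over a vector column, so no separate vector database is needed.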
NIST's Role in AI Standards and Trust
The National Institute of Standards and Technology (NIST) plays a crucial role in advancing AI technologies by establishing standards that enhance trust and reliability. NIST emphasizes the importance of stakeholder engagement to develop consensus-driven guidelines that address security, privacy, and ethical considerations in AI systems. Their mission includes fostering trust through robust measurement science and establishing standards that can be applied across various sectors and use cases. By developing frameworks and tools like the AI Risk Management Framework (AI RMF), NIST provides ways for organizations to assess and manage risks associated with AI technologies.
Building Trust through Collaboration
Establishing trust in AI systems requires a collaborative approach that encompasses diverse expertise across different fields. NIST actively engages with various stakeholders, including technologists, economists, and social scientists, to develop comprehensive frameworks that address the complexities of AI deployment. The AI RMF outlines key characteristics of trustworthy AI systems, such as reliability, accountability, and safety, while bringing together insights from multiple disciplines to define what trust means in practice. This process helps create a shared understanding of expectations and trade-offs involved in making AI technologies safe and effective.
Future Directions for AI Measurement and Standards
The ongoing evolution of AI technologies underscores the need for rigorous evaluation and testing methods to ensure system trustworthiness. NIST acknowledges the limitations in current evaluation practices and emphasizes the necessity for better metrics to assess AI systems' performance and trustworthiness. The organization seeks to develop clear standards and protocols for AI measurement that support the growing complexity of AI applications. Through these efforts, NIST aims to foster confidence among users, ensuring that AI systems not only meet performance expectations but also promote safety, fairness, and accountability.
Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST’s ‘AI Risk Management Framework’ (AI RMF) within the context of the White House’s ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’.
Changelog++ members save 10 minutes on this episode because they made the ads disappear. Join today!
Sponsors:
Timescale – Real-time analytics on Postgres, seriously fast. Over 3 million Timescale databases power IoT, sensors, AI, dev tools, crypto, and finance apps — all on Postgres. Postgres, for everything.
Retool – The low-code platform for developers to build internal tools. Some of the best teams out there trust Retool — Brex, Coinbase, Plaid, Doordash, LegalGenius, Amazon, Allbirds, Peloton, and so many more. Try it free at retool.com/changelog
DeleteMe – DeleteMe makes it quick, easy and safe to remove your personal data online.