Beena Ammanath, Global Head of the Deloitte AI Institute, discusses the importance of self-regulation and building trustworthy AI, exploring dimensions of trustworthy AI such as fairness, bias, and reliability in AI systems. The episode also covers how to operationalize trustworthy AI and the evolving regulatory landscape for AI.
Quick takeaways
Trust is crucial for the success of AI products, and organizations should implement self-regulations to ensure trustworthiness.
Involving stakeholders from different departments and considering specific use cases are key to addressing dimensions like fairness and bias in AI.
Deep dives
The Importance of Trustworthy AI
In this podcast episode, Adel interviews Beena Ammanath, a technology trust and ethics leader at Deloitte, about the importance of building trustworthy AI. Beena underscores the significance of AI regulation and the need for organizations to implement self-regulation. She argues that trust is a key factor in the success of AI products and outlines the core principles of trustworthy AI, such as ethics, responsibility, accountability, and transparency. Beena also explores the dimensions of trustworthy AI, including fairness and bias, robustness and reliability, transparency, privacy, and safety and security. She stresses that these dimensions should be weighed against the specific use case, with stakeholders from across the organization involved in decisions about risk tolerance and accountability. Overall, Beena emphasizes that responsible and trustworthy AI is crucial for building trust, protecting privacy, and mitigating risks.
The Interplay of AI Ethics Across Industries
Beena discusses the interplay of ethics and AI across various industries. She acknowledges that the relevance of dimensions like fairness and bias varies with the use case: they are crucial for consumer-facing applications, but may matter less in areas like predicting machine failure. Beena highlights the importance of identifying the relevant dimensions for each use case and defining the accepted level of bias or other factors. She also emphasizes involving key stakeholders from different departments, such as business leadership, legal and compliance, risk management, and brand protection, to make informed decisions about these dimensions and ensure that potential risks are properly addressed.
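To make the idea of "defining the accepted level of bias" more concrete, here is a minimal sketch (not from the episode) of how a team might quantify one fairness dimension before agreeing on a tolerance for a consumer-facing model. The `demographic_parity_gap` function, the 0.05 threshold, and the choice of demographic parity as the metric are all illustrative assumptions, not something Beena or Deloitte prescribes.

```python
# A minimal, hypothetical sketch of quantifying one bias dimension so a
# steering committee can agree on an accepted threshold. Metric choice,
# function name, and threshold are assumptions for illustration only.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rates across groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical usage: the committee might decide that a gap above 0.05
# (5 percentage points) triggers a review for a consumer-facing model.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap = demographic_parity_gap(preds, groups)
if gap > 0.05:
    print(f"Fairness gap {gap:.2f} exceeds the agreed tolerance -- escalate for review.")
```

In practice, which metric to use and where to set the threshold are exactly the kinds of decisions Beena suggests should be made jointly by business, legal, risk, and brand stakeholders rather than by the data team alone.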
Operationalizing Trustworthy AI
Beena explores how organizations can operationalize trustworthy AI. She recommends establishing a cross-functional steering committee of senior leaders to discuss and define metrics and risk tolerance for trustworthy AI, and suggests providing AI ethics training to all employees so there is a common understanding of ethical principles and responsibilities. She also highlights the importance of integrating risk discussions into project management processes and creating channels for employees to raise concerns and seek guidance. Beena stresses that organizations should consider the ethical implications of AI throughout the entire development cycle, from the conception of an idea to the retirement of a model. By proactively addressing and mitigating risks, organizations can build responsible and trustworthy AI solutions.
The Future of Trustworthy AI
Beena provides insights into the future of trustworthy AI. She predicts a burst of generative AI use cases across different industries and anticipates growing awareness of risk factors and best practices. Beena also discusses the increasing attention on AI regulation and the need for organizations to proactively implement self-regulation. While acknowledging that regulations will differ by industry and use case, she emphasizes the importance of organizations taking responsibility and considering the long-term impacts of AI. Beena encourages organizations to prioritize trust and ethics in their AI initiatives and to leverage technological advances, like generative AI, to enhance explainability and transparency. By doing so, organizations can create AI systems that benefit humanity and build trust with end users.
Throughout the past year, we've seen AI go from a nice-to-have to a must-have in almost every large organization's boardroom. Leadership teams are increasingly focused on deploying AI, and as a result there has never been more pressure on data teams to deliver with it. Yet as that pressure grows, the need to build safe and trustworthy experiences has never been more important. How do we balance innovation with building these trustworthy experiences? How do we make responsible AI practical? Who should be in the room when scoping safe AI use cases?
Beena Ammanath is an award-winning senior technology executive with extensive experience in AI and digital transformation. Her career has spanned leadership roles across e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains. She is also the author of the groundbreaking book Trustworthy AI.
Beena currently leads the Global Deloitte AI Institute and Trustworthy AI/Ethical Technology at Deloitte. Prior to this, she was the CTO of AI at Hewlett Packard Enterprise. A champion for women and multicultural inclusion in technology and business, Beena founded Humans for AI, a 501(c)(3) non-profit promoting diversity and inclusion in AI. Her work and contributions have been recognized with numerous awards, including the 2016 Women Super Achiever Award from the World Women's Leadership Congress and induction into WITI's 2017 Women in Technology Hall of Fame.
Beena was honored by UC Berkeley as the 2018 Woman of the Year for Business Analytics, by the San Francisco Business Times as one of the 2017 Most Influential Women in the Bay Area, and by the National Diversity Council as one of the Top 50 Multicultural Leaders in Tech.
In the episode, Beena and Adel delve into the core principles of trustworthy AI, the interplay of ethics and AI across industries, how to make trustworthy AI practical, the primary stakeholders responsible for ensuring trustworthy AI, the importance of AI literacy in promoting responsible and trustworthy AI, and a lot more.