

Mentioned in 2 episodes
Responsible AI
Book • 2024
Olivia Gambelin's "Responsible AI" provides a practical guide for implementing ethical approaches in organizations working with artificial intelligence.
The book explores the human element in AI development, emphasizing the importance of values-driven decision-making.
It offers strategies for aligning organizational practices and people with ethical principles, promoting responsible innovation.
Gambelin's work challenges the notion of imbuing AI with ethics, instead focusing on human responsibility in shaping AI's development and use.
The book serves as a valuable resource for businesses and individuals seeking to navigate the ethical complexities of AI.
Mentioned by
Mentioned by Bernard Leong as a book written by Brad Smith on embedding responsible AI into technologies.

How Microsoft Research Balances Exploration and Impact Globally with Doug Burger
Mentioned by Kimberly Nevala as the author of the book, discussing AI ethics.

Ethical by Design with Olivia Gambelin
Mentioned by Ben as a new book by Olivia Gambelin on Responsible AI.

95. Responsible AI strategy with Olivia Gambelin
Mentioned by Scott Miller as the author of the book, which focuses on responsible AI implementation.

Olivia Gambelin: AI Ethics and Building Responsible AI Strategies
Mentioned by Mark Surman when discussing the need for responsible AI development.

EP 148: Safer AI - Why we all need ethical AI tools we can trust