
The Shifting Privacy Left Podcast
S2E26: "Building Ethical Machines" with Reid Blackman, PhD (Virtue Consultants)
This week, I welcome philosopher, author, & AI ethics expert, Reid Blackman, Ph.D., to discuss Ethical AI. Reid authored the book, "Ethical Machines," and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.
In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI models can lead to privacy violations, particularly at Big Tech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping "humans in the loop"; combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers.
We end by highlighting his HBR article - "Generative AI-xiety" - and discuss the 4 primary areas of ethical concern for LLMs:
- the hallucination problem;
- the deliberation problem;
- the sleazy salesperson problem; &
- the problem of shared responsibility
Topics Covered:
- What motivated Reid to write his book, "Ethical Machines"
- The key differences between 'active privacy' & 'passive privacy'
- Why engineering incentives to collect more data to train AI models, especially in Big Tech, pose challenges to data minimization
- The importance of aligning privacy agendas with business priorities
- Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
- Automated decision making: when it's necessary to have a 'human in the loop'
- Approaches for mitigating 'AI ethics fatigue'
- The need to back up a company's stated 'values' with actions, and why there should always be 3 to 7 guardrails put in place for each stated value
- The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
- Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
- Reid's advice for technical staff building products & services that leverage LLMs
Resources Mentioned:
- Read the book, "Ethical Machines"
- Reid's podcast, Ethical Machines
Guest Info:
- Follow Reid on LinkedIn
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.