

Edward Santow on Predictive Policing, Racial Bias and AI’s Impact on Human Rights | RegulatingAI Podcast
In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can instead perpetuate discrimination.
Key topics include:
- How Australia’s largest police force used AI to profile Indigenous youth
- The consequences of using historical data without correcting historical bias
- Why system-level harms from AI demand policy-level responses
- What governments must do to protect rights while embracing innovation
A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.
Resources Mentioned:
Edward Santow on LinkedIn: https://www.linkedin.com/in/esantow/
⏱️ Timestamps:
0:00 Podcast Highlights
1:34 Ed’s background and journey into technology governance
2:12 The "aha" moment: an algorithm targeting young people based on race
5:36 Finding a balance between AI’s dystopian problems and positive use cases
9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights
11:12 Why innovation and regulation are not a trade-off
12:22 Comparing the AI regulatory approaches of the EU, US, and China
13:57 Australia's practical, non-ideological approach to AI
15:45 How Australia is building its niche on liberal democratic values
19:22 The shift from "fluffy principles" to practical AI safety standards
22:37 The three most common issues for corporate leaders in AI governance
23:08 The problem with the "AI guru" model of governance
25:08 The "dirty secret" of AI and the importance of engaging workers
35:24 The impact of AI on jobs and the workplace
40:28 The Asia-Pacific region's role in AI governance
44:07 Preserving Indigenous cultures and languages in AI training data
47:14 The concentration of power in a handful of AI companies
50:09 Facial recognition: good uses vs. bad uses
53:57 Lightning round of questions
55:22 Conclusion and farewell