David Leslie's report for the Alan Turing Institute offered an overview of many different concerns in AI ethics. We wanted to show how sustainability is relevant to those concerns in a broader social and economic sense too, related again to this topic of actually building AI out in the world, outside the lab. But it sounds like that work may still be focused on a model-centric view, the academic lab study of a single thing, not yet quite dealing with the messiness of the real world.
In episode 57 of The Gradient Podcast, Andrey Kurenkov speaks to Blair Attard-Frost.
Note: this interview was recorded 8 months ago, and some aspects of Canada’s AI strategy have changed since then. It remains a good overview of AI governance and related topics, however.
Blair is a PhD Candidate at the University of Toronto’s Faculty of Information who researches the governance and management of artificial intelligence. More specifically, they are interested in the social construction of intelligence, unintelligence, and artificial intelligence; the relationship between organizational values and AI use; and the political economy, governance, and ethics of AI value chains. Their research integrates perspectives from service sciences, cognitive sciences, public policy, information management, and queer studies.
Have suggestions for future podcast guests (or other feedback)? Let us know here!
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter or Mastodon
Outline:
* Intro
* Getting into AI research
* What is AI governance
* Canada’s AI strategy
* Other interests
Links:
* Once a promising leader, Canada’s artificial-intelligence strategy is now a fragmented laggard
* The Ethics of AI Business Practices: A Review of 47 Guidelines
Get full access to The Gradient at
thegradientpub.substack.com/subscribe