Data Skeptic

Kyle Polich
Jul 11, 2020 • 27min

GANs Can Be Interpretable

Erik Härkönen joins us to discuss the paper GANSpace: Discovering Interpretable GAN Controls. During the interview, Kyle makes reference to this amazing interpretable GAN controls video and its accompanying codebase found here. Erik mentions the GANSpace Colab notebook, which is a rapid way to try these ideas out for yourself.
Jul 6, 2020 • 29min

Sentiment Preserving Fake Reviews

David Ifeoluwa Adelani joins us to discuss Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection.
Jun 26, 2020 • 32min

Interpretability Practitioners

Sungsoo Ray Hong joins us to discuss the paper Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs.
Jun 19, 2020 • 48min

Facial Recognition Auditing

Deb Raji joins us to discuss her recent publication Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.
Jun 12, 2020 • 38min

Robust Fit to Nature

Uri Hasson joins us this week to discuss the paper Robust-fit to Nature: An Evolutionary Perspective on Biological (and Artificial) Neural Networks.
Jun 5, 2020 • 32min

Black Boxes Are Not Required

Deep neural networks are undeniably effective. They rely on so many parameters that they are appropriately described as “black boxes”. While black boxes lack desirable properties like interpretability and explainability, in some cases their accuracy makes them incredibly useful. But does achieving “usefulness” require a black box? Can we be sure an equally valid but simpler solution does not exist? Cynthia Rudin helps us answer that question. We discuss her recent paper with co-author Joanna Radin titled (spoiler warning)… Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition
May 30, 2020 • 22min

Robustness to Unforeseen Adversarial Attacks

Daniel Kang joins us to discuss the paper Testing Robustness Against Unforeseen Adversaries.
May 22, 2020 • 25min

Estimating the Size of Language Acquisition

Frank Mollica joins us to discuss the paper Humans store about 1.5 megabytes of information during language acquisition.
May 15, 2020 • 36min

Interpretable AI in Healthcare

Jayaraman Thiagarajan joins us to discuss the recent paper Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models.
May 8, 2020 • 35min

Understanding Neural Networks

What does it mean to understand a neural network? That’s the question posed in this arXiv paper. Kyle speaks with Tim Lillicrap about this and several other big questions.
