
The Coming AI Hackers
Azeem Azhar's Exponential View
The Costs of Not Being Explainable
We've talked a lot about explainability and AI systems on previous episodes of this podcast. And it strikes me that the costs of explainability are tangible, they're measurable. But the costs of not being explainable are somehow uncertain. They're sort of optionalities in the far and distant future — an externality for a future generation to deal with. So it feels that we have a kind of human incentive alignment problem here.