99 - Trusting Untrustworthy Machines and Other Psychological Quirks

Philosophical Disquisitions

NOTE

Trustworthy AI Systems and Comparing Human and Machine Decision-Making

Creating trustworthy AI systems at the policy level may require setting normatively appropriate goals, but the design of such systems cannot rest solely on subjective judgments. The classical tools for making AI systems more trustworthy, such as transparency and explainability, may not achieve the desired goal and may need rethinking. Comparisons of human and machine decision-making show that when people are given a deliberate choice, they tend to prefer a human decision-maker; yet when directly confronted with an algorithm, they may perceive it as better or more trustworthy. This suggests that reactions differ depending on the immediacy of the situation.

