99 - Trusting Untrustworthy Machines and Other Psychological Quirks
Nov 7, 2022
Matthias Uhl, a professor of the social and ethical implications of AI, discusses intriguing findings on human-AI interaction. He reveals that people outsource responsibility to machines, trust untrustworthy machines, and prefer the discretion of human decision-makers over the precise logic of machines. This research has significant implications for AI ethics and policy.
People value human judgment and discretion over the precise logic of machines in certain decision-making situations.
Individuals are willing to shift blame and avoid responsibility by hiding behind machines in decision-making contexts.
People's trust in algorithms may not align with their perception of those algorithms' trustworthiness, underscoring the need for further investigation into human-machine interaction.
Deep dives
People prefer moral discretion to algorithms
In one study, participants had to distribute apples between themselves and another person. They could choose to work under a regime in which human experts, an algorithm, or a single individual had discretion over how the apples were distributed. Participants showed a clear preference for the regime in which an individual held discretionary power, indicating a desire for spontaneity and the ability to deviate from the rule. The study suggests that people value the element of human judgment and, in certain situations, are willing to trust it more than algorithms.
People can hide behind machines to avoid responsibility
Another study investigated whether individuals can shift blame or responsibility onto machines. Participants were assigned a task that they could delegate either to a human agent or to an artificial agent. The study found that individuals received significantly less blame when the artificial agent failed on their behalf than when they failed themselves. This suggests that people can effectively hide behind machines to avoid responsibility and blame, and it highlights the responsibility gaps that may open up as machines become more prevalent in decision-making contexts.
Humans trust untrustworthy AI advisors for ethical decisions
A third study explored whether people trust AI advisors in ethical decision-making. Participants faced a moral dilemma and received ethical recommendations from either a human or an algorithm. They followed the advice of both advisors to a similar degree. Even when participants were explicitly informed that the algorithm's advice was based on the decisions of convicted criminals, they still followed it. The findings suggest that people's trust in algorithms may not align with their perception of those algorithms' trustworthiness, highlighting the need for further investigation.
The tension between the human preference for discretion and trust in machines
Taken together, the studies reveal an interesting tension between people's preference for human discretion and their trust in machines. Participants preferred human discretion in decision-making, valuing spontaneity and the ability to deviate from rules. Yet they also trusted machines, relying on their advice even when the machines lacked transparency or were trained on the decisions of convicted criminals. This inconsistency suggests that human interactions with machines are complex and shaped by context, biases, and preconceptions.
Implications for designing trustworthy and reliable AI
The research highlights the need for a nuanced understanding of human-machine interaction and for the design of trustworthy and reliable AI. While people may trust and even prefer algorithms in some contexts, it is crucial to consider the responsibility gaps and potential biases that may arise. Transparency and explanation alone may not be sufficient to establish trust, and the complex dynamics between humans and machines require further exploration. Future research should aim to develop comprehensive theories of human-machine interaction and investigate the impact of evolving technologies on human decision-making and trust.
In this episode I chat to Matthias Uhl. Matthias is a professor of the social and ethical implications of AI at the Technische Hochschule Ingolstadt. He is a behavioural scientist who has been doing a lot of work on human-AI/robot interaction, focusing in particular on applying the insights and methodologies of behavioural economics to these questions. We talk about three recent studies he and his collaborators have run revealing interesting quirks in how humans relate to AI decision-making systems. In particular, his findings suggest that people do outsource responsibility to machines, are willing to trust untrustworthy machines, and prefer the messy discretion of human decision-makers over the precise logic of machines. Matthias's research is fascinating and has some important implications for people working in AI ethics and policy.