Can Explainable AI Mitigate Decision-Making Errors Induced by Algorithms in Street-Level Police Work? An Experiment.
Summary
Machine learning algorithms are increasingly used in street-level bureaucracies. At the same time, frontline decision-making demands individual, human judgment that cannot be fully automated. Algorithms are therefore used to inform, but not replace, the street-level bureaucrat. Street-level decision-makers can, however, become subject to automation bias or confirmation bias when interpreting algorithmic information. These biases mean, respectively, that decision-makers trust algorithmic advice excessively or selectively, and they can lead to new types of decision-making errors. Explainable Artificial Intelligence techniques, algorithmic systems that explain how advice is constructed, are seen as a critical step in preventing algorithm-induced decision-making errors. A pre-registered survey experiment using a mock algorithm was conducted to test these expectations in a sample of street-level police officers (N = 124). The results of this experiment imply that (1) street-level bureaucrats are not prone to automation bias; rather, (2) they are likely to be subject to confirmation bias. Additionally, this study finds that (3) the effects of explaining algorithmic advice may be limited for professional decision-makers. These findings have important implications for how street-level decision-making processes can be supported by algorithms.