Feedback Loops in AI-Based Decision-Aid Systems
Summary
Predictive policing is known to carry the risk of feedback loops that can amplify existing biases. While these loops have received considerable attention in academic research, they remain under-explored in other policing applications. To contribute to this area, we investigate how the intuition of a feedback loop can be operationalised in a general setting where AI-based decision aids are applied, and how feedback loops can arise in policing systems specifically. We provide a model of the elements that constitute a feedback loop and apply it to different AI models to show how their deployment can have a bias-amplifying effect. Additionally, we provide a formal framework showing how feedback loops can arise in theory, as well as a software framework designed to simulate feedback loops in an example use case. In both frameworks, the AI model’s learning capacity is modelled by injecting errors that can be corrected over successive iterations. The simulations are performed for different scenarios representing known causes of bias in AI models, and their results show how feedback loops amplify these biases. Overall, the simulations demonstrate that a feedback loop yields better model outputs for the privileged group, indicating that AI-based decision-aid systems should be carefully evaluated before being deployed in a policing context, in order to avoid unfair and unlawful behaviour.
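To make the mechanism concrete, the following minimal sketch simulates such a loop in Python. It is not the paper’s software framework: the two-district setting, the Poisson observation model, and the names (true_rates, belief, lr, step) are all illustrative assumptions. The idea it demonstrates is the one summarised above: when deployment follows the model’s own prediction, the data the model learns from is shaped by its injected error, so the error persists; under unbiased (uniform) data collection, the same learning rule corrects it.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative toy setting (all numbers are assumptions): two districts
# with identical true incident rates, but the model starts with an
# injected error favouring district 0.
true_rates = np.array([0.5, 0.5])   # incidents per patrol, ground truth
belief0 = np.array([0.7, 0.3])      # model's biased initial estimate
total_patrols = 1000
lr = 0.2                            # how quickly the model updates its belief

def step(belief, feedback):
    """One iteration: allocate patrols, record incidents, update belief."""
    if feedback:
        # Deployment follows the model's own prediction -> feedback loop.
        alloc = belief / belief.sum() * total_patrols
    else:
        # Counterfactual: uniform allocation, observations are unbiased.
        alloc = np.full_like(belief, total_patrols / len(belief))
    observed = rng.poisson(true_rates * alloc)   # recorded incidents
    share = observed / max(observed.sum(), 1)    # what the model "sees"
    return (1 - lr) * belief + lr * share        # error-correcting update

for name, feedback in [("with feedback loop", True), ("uniform allocation", False)]:
    b = belief0.copy()
    for _ in range(50):
        b = step(b, feedback)
    print(f"{name}: belief = {np.round(b, 3)} (true share = 0.5/0.5)")
```

In this sketch the uniform-allocation run converges to the true 0.5/0.5 split, while the feedback run stays near the injected 0.7/0.3 belief, because the over-patrolled district keeps generating the recorded incidents that confirm the model’s error.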