Understanding Perception of Algorithmic Predictions
Decision making is often supported by forecasts from different sources, including human experts and artificial intelligence. This paper examines the perception of algorithmic and human-made forecasts and its potential influence on decision making in situations of uncertainty. Two groups of participants were given the same set of hypothetical choice problems with probabilistic forecasts embedded in them. One group was told the forecasts were made by human experts; the other was told they were made by AI. Each choice problem presented an option that participants had to accept or reject. The objective of this study was to observe the preference for choice options in the Human and AI conditions depending on the forecasts’ framing (positive or negative), confidence level (high, medium, low), and decision domain (serious or trivial). We found that, in general, AI-made forecasts received fewer “yes” answers than human-made forecasts. Overall, framing and confidence level affected the probability of a “yes” response; however, only the framing effect differed in magnitude between the AI and Human conditions, indicating an interaction between those two factors. No main effect was found for decision domain. Additionally, a trust scale revealed higher trust in a human expert than in AI. These findings contribute to the psychology of human-AI interaction and decision making under uncertainty, and suggest that people see algorithmic predictions as less trustworthy.