Uncertainty Communication by AI Assistants: The Effects on User Trust
Summary
As artificial intelligence (AI) rapidly spreads across multiple domains and becomes increasingly integrated into everyday life, user trust is vital to consider. Inappropriate user trust has resulted in fatal accidents and significantly limits the opportunities that AI can offer. Given the uncertainty involved in AI inputs, processing, and outputs, this study investigated the effects of communicating system uncertainty on users’ trust in AI assistants. Trust development was assessed repeatedly whilst 64 participants completed an online search task guided by an AI drone. Following a 2×2 mixed factorial design, drones either did or did not communicate uncertainty and either did or did not deploy a trust repair strategy. The research also assessed whether uncertainty communication enhanced the trust repair strategy and whether it improved users’ perception of and overall interaction with the AI drones. Results show that uncertainty communication significantly dampened the negative effects of an AI error by increasing users’ situational awareness, understanding of the system, and sensitivity to AI fallibility. Participants preferred drones that communicated uncertainty, perceiving them as more trustworthy and valuable. The trust repair strategy significantly repaired violated user trust, yet this effect was not enhanced by uncertainty communication. This research concludes that successful AI systems must: adapt to the fluidity of user trust, provide system transparency, maintain user agency, perform well, recognize past system performance, and empathetically acknowledge the user’s emotional state throughout an interaction.