Guided by AI: Examining Trust, Identity, and Decision-Making in Human-AI Collaboration
Summary
The increasing integration of Artificial Intelligence (AI) into collaborative, high-stakes
environments necessitates a deeper understanding of the factors that govern human-AI trust.
This study investigates how the anthropomorphic characteristics of an AI agent—specifically
its gender representation (male/female), attire (professional/casual), and guidance truthfulness
(truthful/untruthful)—individually and interactively influence user trust and compliance. A
mixed-methods approach was employed, using a 2x2x2 between-subjects experiment where
32 participants interacted with an AI co-driver in a custom-developed rally racing video game.
Data were collected through pre- and post-interaction questionnaires, behavioral analysis of
recorded gameplay, and qualitative responses. A three-way ANOVA revealed that guidance
truthfulness was the strongest predictor of trust, decisively overriding visual cues. Avatar
attire approached, but did not reach, a significant effect on trust, and avatar gender fell well
short; neither of these visual heuristic variables produced a statistically significant main
effect. Qualitative analysis confirmed that while visual cues such as professional
attire and gender stereotypes shaped initial expectations, these perceptions were quickly
supplanted by the AI's functional performance. These findings underscore that while aesthetic
design choices can prime user perceptions, functional reliability is the paramount factor in
establishing and maintaining trust in human-AI collaboration.
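To make the analysis concrete, the sketch below shows how a three-way ANOVA over a 2x2x2 between-subjects design of this kind is typically fit in Python with statsmodels. It is an illustrative example only, not the authors' analysis script, and the data file and column names (trust_score, gender, attire, truthfulness) are hypothetical.

```python
# Illustrative sketch of a 2x2x2 between-subjects three-way ANOVA.
# Assumes a tidy CSV with one row per participant and hypothetical columns:
# trust_score (continuous), gender, attire, truthfulness (each two-level factors).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("trust_ratings.csv")  # hypothetical data file

# Full factorial model: all main effects plus two- and three-way interactions.
model = ols("trust_score ~ C(gender) * C(attire) * C(truthfulness)", data=df).fit()

# Type II sums of squares are a common default for balanced factorial designs.
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```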