
dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Terburg, David
dc.contributor.author: Hennekens, Milou
dc.date.accessioned: 2022-07-26T00:00:43Z
dc.date.available: 2022-07-26T00:00:43Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/41924
dc.description.abstract: Robots are increasingly deployed in safety-critical environments, where they function as intelligent, autonomous agents that cooperate with humans in so-called human-robot teams (HRTs). Robots are expected to make more and more trade-off decisions, sometimes trading an individual soldier's safety for a mission objective, which can damage human trust in a robot team member. This study tested how different moral intentions behind a harmful decision affect trust. Participants formed a team with a virtual partner, either human or robot, to complete two missions. Halfway through the missions the partner made a harmful decision, after which two declarations by the partner revealed different intentions behind it, both aimed at restoring trust. The declarations exposed that the harmful decision was based on either utilitarian or deontological considerations, revealing a lack of benevolence or competence, respectively. Participants' trust was measured before the violation, after the violation, and after the intentions were revealed. Trust developed no differently between partner types. Competence-based trust dropped most strongly after the violation, yet trust was restored more easily after a competence-based violation than after a benevolence-based one. This suggests that, in all teams, benevolence-based trust is the hardest to restore once violated. It is therefore recommended to be careful when being transparent about utilitarian decision-making. Future research should examine the most effective trust repair technique, possibly by investigating the alignment between human moral expectations and a robot partner's decision-making. A further finding that contradicts previous studies is that, although errorless design is crucial in safety-critical industries, humans can forgive a robot team member that makes a mistake.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: The topic of this thesis is trust in human-robot teams in the military. Different intentions behind a robot teammate's harmful decision were tested for their influence on human trust. This study aimed to reveal what moral behaviour human team members expect from robots, which could have implications for future human-robot collaboration and robot design.
dc.title: Trust and moral trade-offs in human-robot teams (HRTs): A comparative study of HRTs and human teams in a virtual military operation
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Human-robot teams; teamwork; trust; utilitarianism; deontology; decision-making
dc.subject.courseuu: Applied Cognitive Psychology
dc.thesis.id: 6695

