dc.rights.license | CC-BY-NC-ND | |
dc.contributor.advisor | Graaf, M.M.A. de | |
dc.contributor.author | Melot Chesnel, Joséphine | |
dc.date.accessioned | 2024-07-24T23:03:05Z | |
dc.date.available | 2024-07-24T23:03:05Z | |
dc.date.issued | 2024 | |
dc.identifier.uri | https://studenttheses.uu.nl/handle/20.500.12932/46858 | |
dc.description.abstract | Human-robot collaboration is becoming increasingly widespread. Robots, like humans, make errors, which break the trust necessary for successful collaboration. It is thus important to implement strategies to repair trust. In the present lab study, three such strategies are examined: apologies, denial, and compensation. Participants play collaborative games with a Pepper robot, during which it makes one of two types of failures: competence-based (it plays poorly) or integrity-based (it cheats). Another goal of this experiment was to examine whether dispositional trust towards robots affects which strategy works best for each individual, which could explain the wide diversity of results in studies in this field.
Confirming previous literature, moral trust decreased more after the integrity failure than after the performance failure, and performance trust decreased more after the performance failure than after the integrity failure. Participants experienced more discomfort in the denial condition than in the apology and compensation conditions (across both failure types). Additionally, while most scales were not influenced by dispositional trust levels, the data showed that dispositional trust does affect which strategy best increases willingness to collaborate with the robot again (e.g., participants with very high dispositional trust towards robots were far more willing to collaborate again in the apology condition). These results indicate the need for further research into individual differences, to better understand how they affect trust towards robots and the effectiveness of trust repair strategies. | |
dc.description.sponsorship | Utrecht University | |
dc.language.iso | EN | |
dc.subject | Human-robot collaboration is becoming increasingly widespread, failures happen, and repair strategies must be implemented to restore trust and sustain the collaboration. This in-lab study examines the effects of three strategies (apologies, compensation, denial) on two types of failures (integrity, performance), as well as the impact of dispositional trust level on trust repair. | |
dc.title | Would You Trust Me Now? A Study on Repair Trust Strategies in Human-Robot Collaboration | |
dc.type.content | Master Thesis | |
dc.rights.accessrights | Open Access | |
dc.subject.keywords | human-robot collaboration, human-robot interaction, trust repair, dispositional trust, communicative strategies | |
dc.subject.courseuu | Artificial Intelligence | |
dc.thesis.id | 34864 | |