
        Trust and moral trade-offs in human-robot teams (HRTs): A comparative study of HRTs and human teams in a virtual military operation

        View/Open
        ACP Thesis MH 6019722_NoComments.docx (935.8Kb)
        Publication date
        2022
        Author
        Hennekens, Milou
        Summary
        Robots are increasingly deployed in safety-critical environments, functioning as intelligent, autonomous agents that cooperate with humans in so-called human-robot teams (HRTs). Robots are expected to make more and more trade-off decisions, sometimes trading an individual soldier's safety for a mission's objective. This can negatively affect human trust towards a robot team member. This study tested different moral intentions behind harmful decision-making. Participants formed a team with a virtual partner, either human or robot, to complete two missions. Halfway through the missions, a harmful decision was made, after which two declarations from the partner revealed different intentions behind this decision, both aimed at restoring trust. The declarations exposed that the harmful decision was based on either utilitarian or deontological considerations and revealed the partner's lack of benevolence or competence, respectively. Participants' trust was measured prior to the violation, after the violation, and after the reveal of the different intentions. No difference in trust development was found between partner types. Participants' competence-based trust dropped the most after the violation. Trust after a competence-based trust violation was more easily restored than after a benevolence-based trust violation. This suggests that in all teams, benevolence-based trust is the hardest to restore after a benevolence-based violation. It is recommended to be careful about being transparent concerning utilitarian decision-making. Suggestions for future research focus on identifying the best trust repair technique, possibly by investigating the alignment between human moral expectations and a robot partner's decision-making. An additional finding that contradicts previous studies is that, although errorless design is crucial in safety-critical industries, humans can forgive their robot team member after it makes a mistake.
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/41924
        Collections
        • Theses

        Related items

        Showing items related by title, author, creator and subject.

        • Human engagement state recognition for autonomous functioning of a robot in human-robot conversation 

          Griffioen, K. (2020)
          The goal of this thesis was to develop a model to classify the different states of engagement. We took on the definition of engagement as the process by which interactors start, maintain and end their perceived connection ...
        • Objection! The staged relations between human beings and agentive objects as less anthropocentric alternative for the design of social robots and human-robot interaction. 

          Vermeulen, D.H.A. (2017)
          This thesis brings together two fields that at first sight seem to have little in common: theatre and social robotics. It argues in what ways dramaturgical principles used in object theatre to transform lifeless things ...
        • The difference in functional connectivity during human-human interaction and human-robot interaction. 

          Hogenhuis, A.L.M.P. (2021)
          The developments in artificial intelligence are leading us towards a new scientific frontier of socially engaged robots. This poses new questions regarding the impact of unfamiliar agents that alter our social environment. ...