
        Evaluating the epistemic condition of responsibility for AI

File
        Master_Thesis_Aniek_Brandt_6253458.pdf (659.7Kb)
        Publication date
        2024
        Author
        Brandt, Aniek
        Summary
This thesis investigates the possible existence of moral responsibility gaps caused by Big Data-powered, bodiless AI systems. It argues that the moral implications of such AI differ substantially from those of the predominantly discussed embodied AI, because it can be used unwittingly and has primarily long-term consequences. Therefore, this thesis evaluates the fulfilment of the epistemic condition for moral responsibility and explores both the immediate and long-term consequences of bodiless AI usage. Firstly, a consequentialist theoretical framework is established by adopting one backward-looking and one forward-looking sense of responsibility to evaluate (blameworthiness and the obligation to see to it that some consequence obtains, respectively) and by providing an explicit definition of the epistemic condition. Secondly, these concepts are applied to the specific case of bodiless AI usage, where the normalisation of algorithmic outsourcing (with a resulting loss of autonomy) is evaluated as the long-term consequence of bodiless AI usage. It is argued that individual users often fail to meet the epistemic condition, resulting in both immediate and long-term responsibility gaps. However, this thesis proposes that responsibility allocation for long-term consequences can best be evaluated through a collective understanding of moral responsibility. By considering the normalisation of algorithmic outsourcing as a collective action problem, it argues for the existence of collective epistemic duties to generate knowledge, suggesting that a collective understanding of moral responsibility is crucial, at least conceptually, for mitigating the long-term consequences of AI usage. How this should be approached in practice is left to future research, with the suggestion that it might best be done from a deontic perspective.
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/48023
        Collections
        • Theses