Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Hopster, Jeroen
dc.contributor.author: Heo, Minwhi
dc.date.accessioned: 2025-03-03T00:02:07Z
dc.date.available: 2025-03-03T00:02:07Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48598
dc.description.abstract: Artificial Intelligence-Driven Clinical Decision Support Systems (AI-CDSSs) have recently been introduced in clinical settings, and their integration is now taking off. Advocates of AI-CDSSs expect enhanced diagnostic accuracy, personalized treatment plans, and improved patient outcomes overall, but significant challenges have also emerged, particularly regarding the attribution of responsibility. As AI systems take on a more prominent role in patient care, the question arises of who is to be held accountable, not just legally liable but morally responsible, when an AI-CDSS contributes to serious medical errors. The integration of AI-CDSSs has substantial implications for the concept and practice of moral responsibility in clinical decision-making. This thesis investigates these implications, analyses the emergent gaps in responsibility, and ultimately proposes that the problem is fundamentally conceptual and should be addressed through the method of conceptual engineering. The thesis is organized into three parts. Part I lays the theoretical foundation by examining how AI-CDSSs challenge our traditional understanding of moral responsibility, focusing on the knowledge and control conditions essential for ethical clinical practice. Part II explores the problem of responsibility gaps, where AI complicates the traditional attribution of responsibility in clinical settings, revealing that these gaps are deeply rooted in a misalignment between the desired and actual functions of responsibility. Part III introduces conceptual engineering as a method to address these gaps, proposing a refined framework for shared responsibility that better accommodates the complexities introduced by AI-CDSSs. By conceptually engineering responsibility, the thesis offers a pathway towards ensuring that the responsibility framework remains robust and adaptable in the face of rapidly advancing healthcare technologies. The findings of this thesis highlight the importance of evolving our ethical frameworks alongside AI technology to maintain accountability and justice in healthcare.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This thesis explores the ethical implications of integrating AI-driven Clinical Decision Support Systems (AI-CDSSs) in healthcare. It focuses on the challenges of attributing moral responsibility when AI contributes to medical errors. The research examines how AI-CDSSs disrupt traditional notions of responsibility, investigates resulting responsibility gaps, and ultimately proposes that the problem is fundamentally conceptual and should be addressed through the method of conceptual engineering.
dc.title: Reconceptualizing Responsibility in AI-Integrated Clinical Decision-Making Practices
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Artificial Intelligence, Clinical Decision-Making, Moral Responsibility, Responsibility Gaps, Conceptual Engineering
dc.subject.courseuu: Philosophy
dc.thesis.id: 38399
