dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Dijkstra, H.A.
dc.contributor.author: Möser, Felix
dc.date.accessioned: 2024-08-05T23:02:29Z
dc.date.available: 2024-08-05T23:02:29Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/47103
dc.description.abstract: In this thesis, I address a topic in the epistemology of Machine Learning (ML). With outstanding predictive accuracy and the ability to handle large amounts of data, ML is increasingly applied in complex systems science. However, ML models are often opaque and are sometimes described as "ruthless correlation extractors", which makes them ineffective for understanding at the process level. I seek to improve upon the concept of "link uncertainty", introduced by Emily Sullivan, who addressed the question of how we can gain understanding through ML. In her account, mechanistic knowledge is merely a passive precondition for an abstract level of understanding that is not further specified. Instead, I focus on mechanisms as a desired target of understanding, while grounding my analytical terminology in the recent movement of "New Mechanism". Against the backdrop of a symbiotic (statistical/mechanistic) modelling framework, I first use case studies that apply ML in the field of climate science, and then centre my ideas on an ML model called AgentNet, which deals with agent-based complex systems in a physically transparent way. Based on my analysis, I introduce a novel concept that I labelled the "Correspondence Principle for Mechanistic Interpretability", or "CPMint" for short. It features a threefold correspondence scheme between an ML model and the target system: first on the ontological level, second on the functional level, and third on the predictive, phenomenological level, thus serving as a recipe for establishing "mechanistic interpretability". In contrast to Sullivan's "link uncertainty", CPMint capitalises on introducing physical transparency into the ML model, which makes it a guide for setting up ML models that aim to contribute to procedural knowledge within complex systems.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: I seek to qualitatively formulate conditions for linking a Machine-Learning algorithm to a complex (agent-based) target system, such that the ML model provides mechanistic insight into emergent phenomena within that system.
dc.title: New Mechanism for complexity: How to enable understanding of emergent phenomena through the lens of Machine-Learning
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: Mechanism; Understanding; Machine-learning; Complexity
dc.subject.courseuu: History and Philosophy of Science
dc.thesis.id: 35895

