
        New Mechanism for complexity: How to enable understanding of emergent phenomena through the lens of Machine-Learning

View/Open
        Master Thesis Felix Möser publication.pdf (7.472Mb)
        Publication date
        2024
        Author
        Möser, Felix
        Summary
In this thesis, I deal with a topic in the epistemology of Machine Learning (ML). With its outstanding predictive accuracy and its ability to handle large amounts of data, ML is increasingly applied in complex systems science. However, ML models are often opaque and sometimes described as "ruthless correlation extractors", which makes them ineffective for understanding at the process level. I seek to improve upon the concept of "link uncertainty", introduced by Emily Sullivan, who addressed the question of how we can gain understanding through ML. In the picture she draws, mechanistic knowledge is merely a passive precondition for an abstract level of understanding that is not further specified. Instead, I focus on mechanisms as a desired target of understanding, while grounding my analytical terminology in the recent movement of "New Mechanism". Against the backdrop of a symbiotic (statistical/mechanistic) modelling framework, I first use case studies that apply ML in climate science, and then centre my analysis on an ML model called AgentNet, which deals with agent-based complex systems in a physically transparent way. Based on my analysis, I introduce a novel concept that I label the "Correspondence Principle for Mechanistic Interpretability", or "CPMint" for short. It features a threefold correspondence scheme between an ML model and its target system: first on the ontological level, second on the functional level, and third on the predictive, phenomenological level, thus serving as a recipe for establishing "mechanistic interpretability". In contrast to Sullivan's "link uncertainty", CPMint capitalises on introducing physical transparency into the ML model, which makes it a guide for setting up ML models that aim to contribute to procedural knowledge about complex systems.
        URI
        https://studenttheses.uu.nl/handle/20.500.12932/47103
        Collections
        • Theses