dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Krempl, G.M.
dc.contributor.author: Makowski, Maciej
dc.date.accessioned: 2024-11-01T01:02:23Z
dc.date.available: 2024-11-01T01:02:23Z
dc.date.issued: 2024
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/48070
dc.description.abstract: Machine learning models have become widely used in recent years. As models are deployed and interact with real-world environments, they are susceptible to performance deterioration due to factors such as data drift and model-induced changes in the surrounding environment. Addressing these challenges requires new approaches to maintain model effectiveness over time. This research focuses on devising strategies to mitigate the effects of performative drift by exploring feature transformation techniques and robust classifier training methods. Drawing inspiration from transfer learning, the study aims to find feature representations that are resilient to drift or capable of reversing its effects. Additionally, it investigates the feasibility of training drift-resistant classifiers in transformed feature spaces. The research questions address the availability of performative data generators, methods for computing feature transformations, and the impact of these transformations on data distributions. Furthermore, the study examines the possibility of training robust classifiers independently of the strength of performative effects and explores potential modifications to improve the effectiveness of the proposed methods. The main innovation introduced in this thesis is the design of an architecture that provides drift-resistant classification and maps points back to the starting distribution. The devised model is a synthesis of a domain adversarial neural network and a generative adversarial network. The main experimental method used by this thesis is simulation, combining performative data generators available in the literature, existing transfer learning methods, and the newly created architecture. Finally, a series of experiments was performed, demonstrating that under certain conditions it is possible to train a stable classifier. Alongside that classifier, a generator network is trained; with some approximation, that network can reproduce the original form of a dataset that has been influenced by performative drift.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: The goal of the thesis is to design a solution for mitigating performative drift. First, a review of performative data generators is performed. Subsequently, transfer learning methods are reviewed with respect to their applicability in this area. Finally, an architecture for mitigating the drift is synthesized and evaluated in a series of experiments. The architecture combines elements from Domain Adversarial Neural Networks and Generative Adversarial Networks.
dc.title: Feature Importance Mapping in performative predictions
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: performative drift; alternative to retraining; generative domain adversarial neural network; mapping function; data distribution
dc.subject.courseuu: Business Informatics
dc.thesis.id: 40716
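
The abstract above describes a synthesis of a domain adversarial neural network (DANN) with a generative adversarial network. The following is a minimal sketch of how such an architecture could be wired together, assuming PyTorch; the module names, layer sizes, and toy data are hypothetical illustrations, not the implementation used in the thesis.

    # Minimal DANN + generator sketch (illustrative only; assumes PyTorch).
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Gradient reversal layer: identity forward, negated gradient backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    # Shared feature extractor plus three heads: task classifier, domain critic, generator.
    feature_extractor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 16))
    label_classifier = nn.Sequential(nn.Linear(16, 2))      # drift-resistant classification head
    domain_critic = nn.Sequential(nn.Linear(16, 1))         # distinguishes source vs. drifted data
    generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))  # maps features back to input space

    x_drifted = torch.randn(8, 10)                             # toy batch of drifted points
    z = feature_extractor(x_drifted)
    class_logits = label_classifier(z)                         # classification in the transformed feature space
    domain_logits = domain_critic(GradReverse.apply(z, 1.0))   # adversarial signal via reversed gradients
    x_reconstructed = generator(z)                             # approximate pre-drift form of the batch

In training, the domain critic would be optimized to separate source batches from drifted batches while the reversed gradients push the feature extractor toward drift-invariant representations, and the generator would be trained, for example with an adversarial or reconstruction loss, to map drifted points back toward the original distribution.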

