Show simple item record

dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Karnstedt-Hulpus, I.R.
dc.contributor.author: Dughera, Luca
dc.date.accessioned: 2025-08-29T00:03:17Z
dc.date.available: 2025-08-29T00:03:17Z
dc.date.issued: 2025
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/50135
dc.description.abstract: Graph Neural Networks (GNNs) have shown remarkable success in modeling relational data across many domains. However, training a GNN from scratch for each new task or dataset is computationally expensive and typically requires large amounts of labeled data, which are often scarce. This thesis explores strategies for pre-training GNNs, focusing on how common topological features can be used during pre-training to enhance transferability, and how new, previously unseen features can be leveraged in the downstream task to improve a model's performance. The central hypothesis is that common, easily obtainable topological features such as degree, PageRank, eigenvector centrality, and clustering coefficients can be used to build generalizable latent representations, which can then be combined with features available only in the downstream task. We investigate methods for encoding these common features during pre-training and for combining them with downstream features, aiming to improve performance on downstream tasks in domains where data is limited. The work proposes two frameworks for topology-based pre-training and evaluates their effectiveness on downstream tasks. Our findings show that using topological graph features during pre-training improves a model's performance on the downstream task; moreover, our experiments indicate that adding topological features to a model substantially improves its performance in its own right.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.subject: This thesis explores using topological features (e.g., degree, PageRank) for pre-training Graph Neural Networks (GNNs) to improve transferability and performance on new tasks, especially with limited data. Two pre-training frameworks are proposed, showing that incorporating these features significantly enhances model performance on downstream tasks.
dc.title: Leveraging the Transferability Of Structural Graph Features for GNN Pre-training
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.keywords: GNN; Graph Neural Network; pre-training; graph; graphs; topology; topological features; AI; Artificial Intelligence; degree; PageRank; centrality; clustering coefficients; latent representations; transferability; downstream tasks; limited labeled data; model performance; graph-based learning; feature encoding; domain adaptation
dc.subject.courseuu: Artificial Intelligence
dc.thesis.id: 53205
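The abstract names four node-level topological features (degree, PageRank, eigenvector centrality, clustering coefficient) as pre-training inputs. As a rough illustration of what such features look like, the sketch below computes all four from an adjacency matrix with plain NumPy; the function name, parameters, and power-iteration approach are illustrative assumptions, not taken from the thesis itself.

```python
import numpy as np

def topological_features(A: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Per-node features from an undirected adjacency matrix A.

    Returns an (n, 4) matrix with columns:
    degree, PageRank, eigenvector centrality, clustering coefficient.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)

    # PageRank: power iteration with the row-normalized random-walk matrix.
    safe_deg = np.where(deg > 0, deg, 1.0)       # avoid division by zero for isolated nodes
    M = A / safe_deg[:, None]                    # row-stochastic transition matrix
    pr = np.full(n, 1.0 / n)
    for _ in range(iters):
        pr = (1.0 - damping) / n + damping * (M.T @ pr)

    # Eigenvector centrality: power iteration on A itself.
    ev = np.full(n, 1.0 / n)
    for _ in range(iters):
        ev = A @ ev
        ev = ev / np.linalg.norm(ev)

    # Clustering coefficient: (A^3)_ii / 2 counts triangles through node i.
    tri = np.diag(A @ A @ A) / 2.0
    pairs = deg * (deg - 1) / 2.0                # possible neighbor pairs per node
    cc = np.where(pairs > 0, tri / np.maximum(pairs, 1e-12), 0.0)

    return np.stack([deg, pr, ev, cc], axis=1)

# Example: a triangle graph — every node has degree 2 and clustering coefficient 1.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
features = topological_features(A)               # shape (3, 4)
```

Because all four quantities depend only on graph structure, they are available for any graph regardless of domain, which is what makes them candidates for transferable pre-training signals.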

