dc.rights.license: CC-BY-NC-ND
dc.contributor.advisor: Bisseling, Rob
dc.contributor.author: Klein, Katharina
dc.date.accessioned: 2022-05-24T00:00:38Z
dc.date.available: 2022-05-24T00:00:38Z
dc.date.issued: 2022
dc.identifier.uri: https://studenttheses.uu.nl/handle/20.500.12932/41581
dc.description.abstract: Neural networks have been on the rise in recent years and are being applied in many different areas. Since the underlying data in many applications naturally has a graph-like structure, a type of network called the graph neural network (GNN) was proposed in 2009. This network type captures dependencies within the graph by updating the state of each vertex based on the states of its neighbors, a process more recently referred to as message passing. As graphs in practical applications become considerably large, training and applying a GNN is computationally expensive and time-consuming, so parallelization is expected to be worthwhile. For the implementation, the Think Like A Vertex (TLAV) framework is a reasonable approach, as it fits nicely with the BSP model for parallel programming. First introduced in 2010, the TLAV approach alternates between local computations at a vertex and exchanging information with neighboring vertices. In this project, we combine TLAV with the BSP model to design a parallel algorithm implementing the message-passing process of a GNN. We test this parallelization with an update function based on the Gated Recurrent Unit (GRU). Our experiments indicate that a reasonable speedup is obtained with respect to a sequential implementation, showing that the running time can be significantly reduced for graphs of different sizes and with different properties.
dc.description.sponsorship: Utrecht University
dc.language.iso: EN
dc.title: The Think Like A Vertex approach in a parallel graph neural network
dc.type.content: Master Thesis
dc.rights.accessrights: Open Access
dc.subject.courseuu: Mathematical Sciences
dc.thesis.id: 4054
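
The message-passing step with a GRU-based update described in the abstract above can be illustrated with a minimal Python sketch. This is not code from the thesis; every name (gru_update, message_passing_step, the parameter matrices) and the choice of summing neighbor states as the aggregation are assumptions made purely for illustration. Each step mirrors the TLAV/BSP structure: a communication phase that gathers neighbor states, followed by a local GRU-style update of each vertex state.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_update(h, m, W_z, U_z, W_r, U_r, W_h, U_h):
    # GRU-style state update: h is the current vertex state, m the aggregated
    # message from its neighbors; the matrices are hypothetical model parameters.
    z = sigmoid(W_z @ m + U_z @ h)               # update gate
    r = sigmoid(W_r @ m + U_r @ h)               # reset gate
    h_tilde = np.tanh(W_h @ m + U_h @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

def message_passing_step(states, adjacency, params):
    # One superstep: every vertex first collects its neighbors' states
    # (communication phase), then updates its own state (local computation).
    new_states = {}
    for v, neighbors in adjacency.items():
        m = sum((states[u] for u in neighbors), start=np.zeros_like(states[v]))
        new_states[v] = gru_update(states[v], m, *params)
    return new_states

In a parallel BSP implementation, the vertices would be distributed over the processors and the neighbor states exchanged during the communication phase of each superstep; the sketch above only shows the per-step logic sequentially.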

