The Think Like A Vertex approach in a parallel graph neural network
Neural networks have been on the rise in recent years and are being applied in many different areas. Since in many applications the underlying data naturally has a graph-like structure, a type of network called the graph neural network (GNN) was proposed in 2009. This network type captures dependencies within the graph by updating the state of each vertex based on the states of its neighbors, a process that has more recently become known as message passing. As graphs in practical applications grow considerably large, training and applying a GNN is computationally expensive and time-consuming, so parallelization is expected to be worthwhile. For the implementation, the Think Like A Vertex (TLAV) framework is a reasonable approach, as it fits nicely with the bulk synchronous parallel (BSP) model for parallel programming. First introduced in 2010, the TLAV approach alternates between local computations in a vertex and exchanges of information with neighboring vertices. In this project, we combine TLAV with the BSP model to design a parallel algorithm implementing the message passing process of a GNN. We test this parallelization with an update function based on the Gated Recurrent Unit (GRU). Our experiments indicate that a reasonable speedup is obtained with respect to a sequential implementation, showing that the running time can be significantly reduced for graphs of different sizes and with different properties.
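To make the message-passing scheme concrete, the following is a minimal sequential sketch of one BSP-style superstep with a GRU update. It is illustrative only: the graph, the sum aggregation of neighbor states, and the parameter shapes (`W`, `U`, `b` per gate) are assumptions for this example, not the thesis's actual implementation, and no parallel execution is shown.

```python
import numpy as np

def gru_update(h, m, W, U, b):
    """Standard GRU step: h is the vertex state, m the aggregated
    message; W, U, b hold parameters for the update (z), reset (r),
    and candidate (n) gates (hypothetical parameterization)."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    z = sig(m @ Wz + h @ Uz + bz)            # update gate
    r = sig(m @ Wr + h @ Ur + br)            # reset gate
    n = np.tanh(m @ Wn + (r * h) @ Un + bn)  # candidate state
    return (1 - z) * h + z * n

def superstep(states, adj, params):
    """One BSP superstep: every vertex aggregates (here: sums) its
    neighbors' states, then applies the GRU update locally. All
    vertices read the states from the previous superstep, mirroring
    the barrier synchronization of the BSP model."""
    new_states = {}
    for v, h in states.items():
        m = sum((states[u] for u in adj[v]), np.zeros_like(h))
        new_states[v] = gru_update(h, m, *params)
    return new_states

# Toy graph: a triangle on vertices 0, 1, 2.
rng = np.random.default_rng(0)
d = 4
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
states = {v: rng.standard_normal(d) for v in adj}
params = (
    [rng.standard_normal((d, d)) * 0.1 for _ in range(3)],  # W
    [rng.standard_normal((d, d)) * 0.1 for _ in range(3)],  # U
    [np.zeros(d) for _ in range(3)],                        # b
)
for _ in range(3):  # three message-passing rounds
    states = superstep(states, adj, params)
```

In a TLAV parallelization, the loop over vertices inside `superstep` is the part distributed across processors, with message exchange and a barrier between supersteps.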