Marco Serafini was awarded a grant from the National Science Foundation for the project “Transparently Scaling Graph Neural Network Training to Large-Scale Models and Graphs”.
Summary: Large-scale graphs with billions of edges are ubiquitous in many fields of industry, science, and engineering, such as recommendation systems, social graph analysis, knowledge bases, materials science, and biology. In particular, Graph Neural Networks (GNNs), an emerging class of machine learning (ML) models, are increasingly adopted due to their superior performance on many tasks. Unfortunately, progress towards training GNNs on large-scale real-world graphs is undermined by the lack of adequate systems support for ML practitioners. This project will conduct fundamental research on algorithms, systems, and infrastructure to meet the pressing and growing need for GNN training systems that scale to both large graph datasets and large, expressive GNN models transparently to users. Supporting large-scale graphs and GNN models will unleash innovation in a wide range of domains by making it easier for ML practitioners to develop large and expressive models without having to work around the scalability limitations of current GNN training systems.