
A Caching-based Framework for Scalable Temporal Graph Neural Network Training

Representation learning over dynamic graphs is critical for many real-world applications such as social network services and recommender systems. Temporal graph neural networks (T-GNNs) are powerful representation learning methods and have demonstrated remarkable effectiveness on continuous-time dynamic graphs. However, T-GNNs still suffer from high time complexity, which increases linearly with the number of timestamps and grows exponentially with the model depth, making them not scalable to large dynamic graphs. To address the limitations, we propose Orca, a novel framework that accelerates T-GNN training by caching and reusing intermediate embeddings. We design an optimal caching policy, named MRD, for the uniform cache replacement problem, where embeddings at different intermediate layers have identical dimensions and recomputation costs. MRD not only improves the efficiency of training T-GNNs by maximizing the number of cache hits but also reduces the approximation errors by avoiding keeping and reusing extremely stale embeddings. For the general cache replacement problem, where embeddings at different intermediate layers can have different dimensions and recomputation costs, we solve this NP-hard problem by presenting a novel two-stage framework with approximation guarantees on the achieved benefit of caching. Furthermore, we have developed profound theoretical analyses of the approximation errors introduced by reusing intermediate embeddings, providing a thorough understanding of the impact of our caching and reuse schemes on model outputs. We also offer rigorous convergence guarantees for model training, adding to the reliability and validity of our Orca framework. Extensive experiments have validated that Orca can obtain two orders of magnitude speedup over state-of-the-art T-GNNs while achieving higher precision on various dynamic graphs.
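The abstract states that MRD is hit-optimal for the uniform cache replacement problem, where all cached embeddings cost the same to recompute. The exact MRD formulation is given only in the full paper; as an illustrative assumption, the classic Belady-style policy with the same optimality property can be sketched as follows, where on eviction we drop the cached item whose next access lies farthest in the future (the function name `belady_hits` is hypothetical, not from the paper):

```python
from typing import Hashable, List, Set


def belady_hits(accesses: List[Hashable], capacity: int) -> int:
    """Count cache hits under a Belady-style policy: on a miss with a
    full cache, evict the entry whose next access is farthest away
    (or that is never accessed again)."""
    cache: Set[Hashable] = set()
    hits = 0
    for i, item in enumerate(accesses):
        if item in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            # Distance to the next access of a cached entry; entries
            # never accessed again have infinite distance.
            def next_use(x: Hashable) -> float:
                try:
                    return accesses.index(x, i + 1)
                except ValueError:
                    return float("inf")

            cache.remove(max(cache, key=next_use))
        cache.add(item)
    return hits
```

With the access sequence `a b c a b d a` and capacity 2, this policy scores 2 hits, whereas LRU scores none on the same trace, which is the kind of gap a hit-maximizing policy exploits during T-GNN training.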

Temporal graph neural networks; cache replacement

YIMING LI, YANYAN SHEN, LEI CHEN, MINGXUAN YUAN


The Hong Kong University of Science and Technology, Hong Kong, China

Shanghai Jiao Tong University, Shanghai, China

The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China and The Hong Kong University of Science and Technology, Hong Kong, China

Huawei Technologies Noah’s Ark Lab, Hong Kong, China


2025

ACM Transactions on Database Systems


ISSN:0362-5915
Year, Volume (Issue): 2025, 50(1)
  • 101