Automatic Pipeline Parallel Training Framework for General-purpose Computing Devices
Training large-scale neural networks usually exceeds the memory and computing capacity of a single node, requiring distributed training across multiple nodes. Existing distributed deep learning frameworks are designed mainly for specific hardware environments and do not adapt effectively to the variety of general-purpose computing devices. To support efficient training of large-scale deep neural networks, this paper implements a general-purpose automatic pipeline parallel distributed training framework. The framework combines a pipeline-based model parallel strategy with an algorithm that automatically partitions the neural network model, enabling automatic parallelization and training of large-scale models and training data on general computer clusters, including China's new generation of supercomputers, and significantly reducing the memory and computing pressure on each node. The framework requires no manual tuning and can automatically and efficiently deploy deep neural networks to multi-node distributed environments. It is suitable not only for supercomputers and other high-performance computer clusters, but can also be deployed to other general distributed computing environments, providing support for the automatic distributed training of large-scale neural networks.
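To make the core idea concrete, the following is a minimal sketch of automatic model partitioning for pipeline parallelism, assuming a sequential model and a parameter-count cost model; the helper `split_into_stages` and the balancing criterion are illustrative assumptions, not the paper's actual splitting algorithm, which may instead balance compute time or activation memory.

```python
import torch.nn as nn

def split_into_stages(model: nn.Sequential, num_stages: int):
    """Greedily partition a sequential model into pipeline stages so
    that each stage holds a roughly equal share of the parameters.
    (Hypothetical helper; shown only to illustrate automatic splitting.)"""
    total = sum(p.numel() for p in model.parameters())
    target = total / num_stages
    stages, current, acc = [], [], 0
    for layer in model:
        current.append(layer)
        acc += sum(p.numel() for p in layer.parameters())
        # Close the stage once it reaches its parameter budget,
        # leaving room for the remaining stages.
        if acc >= target and len(stages) < num_stages - 1:
            stages.append(nn.Sequential(*current))
            current, acc = [], 0
    stages.append(nn.Sequential(*current))
    return stages

# Example: split a toy MLP into 4 pipeline stages,
# one stage per node in a 4-node cluster.
model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(16)])
stages = split_into_stages(model, num_stages=4)
for i, stage in enumerate(stages):
    n = sum(p.numel() for p in stage.parameters())
    print(f"stage {i}: {len(stage)} layers, {n} parameters")
```

In an actual pipeline-parallel run, each stage would be placed on a separate node and micro-batches would be streamed through the stages so that forward and backward passes of different micro-batches overlap, which is what reduces the per-node memory footprint the abstract describes.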