
Optimizing operator computation of MiniGo on a high-performance heterogeneous accelerator

An efficient parallel computing method was proposed based on the characteristics of the high-performance heterogeneous accelerator and the training mode of MiniGo. On-chip computing resources were rationally allocated to achieve pipelined parallelism between heterogeneous devices. A shared-memory programming mode was designed around the shared storage segments that exist between heterogeneous devices, reducing data-transfer overhead. Exploiting the multiple computing resources within a digital signal processing (DSP) cluster, different operator-level parallel optimization strategies were designed according to the compute and memory-access characteristics of each operator. In addition, an easy-to-use high-performance operator library was implemented for TensorFlow. Experimental results show that the method achieves multi-core parallel computing for typical operators: the convolution operator attains a speedup of 24.69 over a single core, and compared with a trimmed 8-core FT2000+ CPU, training and self-play execution are accelerated by factors of 3.83 and 1.5, respectively.

heterogeneous computing; operator optimization; convolutional neural networks; reinforcement learning

QIAO Peng, HE Zhouyu, LI Rongchun, JIANG Jingfei


College of Computer Science, National University of Defense Technology, Changsha 410073, Hunan, China

National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, Hunan, China


Funding: National Key Laboratory Stable Support Project (WDZC20205500104)

2024

Journal of National University of Defense Technology
National University of Defense Technology


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.517
ISSN:1001-2486
Year, volume (issue): 2024, 46(1)