Efficient RNN inference engine on very long vector processor
The continual increase in model depth and the inconsistent lengths of input sequences pose great challenges to optimizing the performance of recurrent neural networks (RNNs) on different processors. An efficient RNN inference engine was implemented for the independently developed long vector processor FT-M7032. The engine adopts a row-first matrix-vector multiplication algorithm and a data-aware multi-core parallelization scheme to improve the computational efficiency of matrix-vector multiplication; a two-level kernel fusion optimization to reduce the overhead of transferring temporary data; and hand-written assembly optimizations of multiple operators to further exploit the performance potential of the long vector processor. Experiments show that the RNN inference engine on the long vector processor achieves high performance: compared with a multi-core ARM CPU and an Intel Golden CPU, the RNN-like long short-term memory (LSTM) model achieves speedups of up to 62.68 times and 3.12 times, respectively.
Keywords: multicore DSP; very long vector processor; recurrent neural networks; parallel optimization