
Research on a CGRA accelerator for sparse convolutional neural networks

Targeting sparse convolutional neural network (CNN) applications, whose scale keeps growing and whose structures evolve rapidly, this paper proposes DyCNN, an energy-efficient and flexible accelerator that improves their performance and energy efficiency. DyCNN is built on a coarse-grained reconfigurable architecture (CGRA), which combines flexibility with high energy efficiency and exploits high instruction-level parallelism to support CNN operations efficiently. DyCNN uses a data-aware dynamic instruction filtering mechanism to eliminate, in each processing unit, the large number of invalid computation and memory-access instructions caused by the static sparsity of weights and the dynamic sparsity of activations in sparse CNNs, so that the processing units can reuse a single set of instructions as efficiently as when executing a dense network. In addition, DyCNN adopts a load-aware scheduling strategy that combines static work distribution with dynamic work stealing to resolve the load imbalance caused by sparsity. Experimental results show that DyCNN achieves an average 1.69× speedup and 3.04× energy savings when running sparse CNNs compared with running dense CNNs, and achieves 2.78× and 1.48× speedups and 35.62× and 1.17× energy savings compared with solutions on a state-of-the-art GPU (cuSPARSE) and on Cambricon-X, respectively.
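To make the filtering idea concrete, the following is a minimal functional sketch, not DyCNN's actual microarchitecture or instruction set: a processing element walks the same dense instruction sequence as its peers but drops a multiply-accumulate and its operand loads whenever the weight operand (static sparsity) or the activation operand (dynamic sparsity) is zero. The function name `filtered_dot` and the per-MAC instruction granularity are illustrative assumptions.

```python
def filtered_dot(weights, activations):
    """Walk a shared dense MAC sequence, filtering out invalid work."""
    acc = 0
    issued = 0
    for w, a in zip(weights, activations):
        # The filter stage checks operands before issue: a zero weight
        # (known statically after pruning) or a zero activation (known
        # only at run time) makes the MAC and its loads invalid.
        if w == 0 or a == 0:
            continue            # instruction filtered out: no ALU or memory cost
        acc += w * a            # only valid MACs reach the functional unit
        issued += 1
    return acc, issued

acc, issued = filtered_dot([0, 3, 0, 5], [7, 0, 2, 4])
print(acc, issued)              # 20 1 -> three of the four MACs were filtered
```

Because every processing element applies the same check locally, all of them can keep fetching one shared instruction sequence instead of per-PE sparse schedules.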
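The load-balancing idea can likewise be sketched in a few lines: work tiles are first split statically across processing elements, and any element that drains its own queue steals a tile from the currently busiest peer. The tile costs, the number of PEs, and the victim-selection policy below are illustrative assumptions rather than the paper's exact hardware scheme.

```python
from collections import deque
import heapq

def simulate(num_pes, tile_costs, stealing=True):
    """Per-PE finish time under a static round-robin split, with optional stealing."""
    # Static phase: round-robin assignment that ignores per-tile sparsity.
    queues = [deque(tile_costs[i::num_pes]) for i in range(num_pes)]
    # (time a PE becomes free, PE id); all PEs are free at t = 0.
    ready = [(0, pe) for pe in range(num_pes)]
    heapq.heapify(ready)
    finish = [0] * num_pes
    while ready:
        now, pe = heapq.heappop(ready)
        if not queues[pe] and stealing:
            # Dynamic phase: steal one tile from the fullest remaining queue.
            victim = max(range(num_pes), key=lambda j: sum(queues[j]))
            if queues[victim]:
                queues[pe].append(queues[victim].pop())
        if queues[pe]:
            cost = queues[pe].popleft()
            finish[pe] = now + cost
            heapq.heappush(ready, (finish[pe], pe))
        # A PE with no tiles left and nothing to steal simply retires.
    return finish

costs = [9, 1, 8, 1, 7, 2, 6, 1]            # per-tile cost ~ its number of non-zeros
print(simulate(4, costs, stealing=False))    # [16, 3, 14, 2]: static split alone is imbalanced
print(simulate(4, costs, stealing=True))     # [9, 9, 8, 9]: stealing evens out finish times
```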

sparse convolutional neural network (CNN); dedicated accelerator; coarse-grained reconfigurable architecture (CGRA); dynamic instruction filtering; dynamic workload balance

谭龙, 严明玉, 吴欣欣, 李文明, 吴海彬, 范东睿


State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190

University of Chinese Academy of Sciences, Beijing 100049


Supported by the National Natural Science Foundation of China (62202451), the CAS Project for Young Scientists in Basic Research (YSBR-029), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences.

2024

高技术通讯 (High Technology Letters)
Institute of Scientific and Technical Information of China


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.19
ISSN: 1002-0470
Year, volume (issue): 2024, 34(2)