
Design of deep convolutional neural network accelerator based on low-cost FPGA

Existing DCNNs generate a large amount of inter-layer feature data during inference. To maintain real-time processing on embedded systems, a significant amount of on-chip storage is required to cache inter-layer feature maps. This paper proposes an inter-layer feature compression technique that significantly reduces off-chip memory access bandwidth. In addition, a generic convolution computation scheme tailored to the characteristics of BRAM in FPGAs is proposed, together with circuit-level optimizations that both reduce the number of memory accesses and improve DSP computational efficiency, thereby greatly increasing computation speed. Compared with running MobileNetV2 on a CPU, the proposed DCNN accelerator achieves a performance improvement of 6.3 times; compared with DCNN accelerators of the same type, it achieves DSP performance efficiency improvements of 17% and 156%, respectively.
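The abstract reports DSP performance efficiency gains of 17% and 156% but does not define the metric on this page. In the FPGA accelerator literature it is commonly taken to be sustained throughput divided by the number of DSP slices used; the short Python sketch below only illustrates that generic definition. The function name and all numbers are hypothetical and are not results from the paper.

def dsp_performance_efficiency(throughput_gops, dsp_slices_used):
    # Generic metric assumed here: sustained throughput (GOPS) per DSP slice used.
    return throughput_gops / dsp_slices_used

# Hypothetical values for illustration only: an accelerator sustaining
# 80 GOPS on 220 DSP slices, and a variant with 17% higher efficiency
# at the same DSP count.
baseline = dsp_performance_efficiency(80.0, 220)
improved = dsp_performance_efficiency(80.0 * 1.17, 220)
print(f"baseline: {baseline:.3f} GOPS/DSP, improved: {improved:.3f} GOPS/DSP")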

deep convolutional neural network; field programmable gate array; deep learning

Yang Tong, Xiao Hao


School of Microelectronics, Hefei University of Technology, Hefei 230601, China


National Natural Science Foundation of China

61974039

2024

Electronic Measurement Technology
Beijing Radio Technology Research Institute


CSTPCD; Peking University Core Journals
Impact factor: 1.166
ISSN: 1002-7300
Year, Volume (Issue): 2024, 47(10)