Acta Scientiarum Naturalium Universitatis Pekinensis (北京大学学报(自然科学版)), 2024, Vol. 60, Issue 5: 786–798. DOI: 10.13209/j.0479-8023.2024.066

Design of Acceleration Unit of Encoding and Frame Generation for PAICORE2.0

丁亚伟¹ 曹健¹ 李琦彬¹ 冯硕¹ 杨辰涛¹ 王源² 张兴³

Author Information

  • 1. School of Software and Microelectronics, Peking University, Beijing 102600
  • 2. School of Integrated Circuits, Peking University, Beijing 100871
  • 3. School of Integrated Circuits, Peking University, Beijing 100871; Key Laboratory of Integrated Microsystems Science, Engineering and Applications, Peking University Shenzhen Graduate School, Shenzhen 518055


Abstract

An edge computing system was designed around the spiking neural network chip PAICORE2.0 of Peking University, in conjunction with Xilinx ZYNQ. However, the software encoding and frame generation processes on the processing system (PS) side are slow and limit the performance of the system. Therefore, a hardware acceleration method is proposed: the software encoding and frame generation processes, which are executed serially on the PS side, are moved to the data path on the programmable logic (PL) side for pipelined parallel execution. The hardware acceleration unit mainly consists of highly parallel convolution units, parameterizable spiking neurons, width-balanced data buffers, and other modules. The results show that the method removes the time overhead of software encoding and frame generation without increasing the data path transmission delay. In the CIFAR-10 image classification example, compared with software encoding and frame generation, the hardware encoding and frame generation module incurs only marginal increases in resource utilization: 9.3% more look-up tables (LUTs), 3.7% more block RAMs (BRAMs), 2.6% more flip-flops (FFs), 0.9% more LUTRAMs, and 14.9% more digital signal processors (DSPs), along with a 14.6% increase in power consumption. In return, it achieves approximately an 8.72-fold improvement in inference speed.
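The encoding and frame generation stage the abstract describes (a convolution whose outputs drive parameterizable spiking neurons, producing binary spike frames for the chip) can be sketched in software for intuition. This is a minimal NumPy illustration, not the paper's hardware design: the leaky integrate-and-fire (LIF) model, its parameters (`threshold`, `leak`, `steps`), and the binary frame layout are assumptions introduced here for exposition.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution; the PL unit performs the
    equivalent multiply-accumulates with many parallel lanes."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def lif_encode(currents, steps=8, threshold=1.0, leak=0.9):
    """Parameterizable LIF spiking neurons: each time step, leak and
    integrate the input current, then fire and reset wherever the
    membrane potential crosses threshold.
    Returns a (steps, H, W) binary spike train (the 'frames')."""
    v = np.zeros_like(currents)
    frames = []
    for _ in range(steps):
        v = leak * v + currents            # leaky integration
        spikes = (v >= threshold).astype(np.uint8)
        v = np.where(spikes == 1, 0.0, v)  # reset fired neurons
        frames.append(spikes)
    return np.stack(frames)
```

In hardware, each loop iteration corresponds to one pipeline stage operating on streaming data, so the per-step serial cost visible here is what the PL-side design hides behind the data path.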


Key words

spiking neural network chip; PAICORE2.0; ZYNQ; spike encoding; hardware acceleration; convolutional acceleration unit


Funding

Shenzhen Science and Technology Innovation Commission Fund (KQTD20200820113105004)

Publication Year

2024

Journal: Acta Scientiarum Naturalium Universitatis Pekinensis (北京大学学报(自然科学版)), Peking University
ISSN: 0479-8023