
Low Power Neural Network Accelerators Based on Edge Deployment

Convolutional neural networks, a class of deep learning models for processing grid-structured data, are widely used in industries such as autonomous driving and aerospace. As data volumes grow, convolutional network structures become increasingly complex, and deploying such compute- and resource-intensive networks on low-power, resource-constrained edge devices becomes difficult. FPGAs, with their high parallelism and low power consumption, can serve as edge deployment devices. On this basis, an accelerator for the lightweight LeNet-5 network is proposed: pipelined parallel acceleration and loop unrolling are used to maximize parallel computation on the FPGA, Vitis HLS is used to convert the high-level-language implementation into a hardware description language, and the software driver is then written in the Vitis IDE. Experimental results show that, compared with network inference on a CPU or GPU, inference on the FPGA side of a ZYNQ device reduces power consumption by a factor of 8 at a similar detection speed, giving edge deployment of neural networks an additional option.
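The abstract names two optimization techniques, pipelining and loop unrolling, applied through Vitis HLS. The fragment below is a minimal sketch of how such directives are typically attached to a convolution loop nest; the layer dimensions (the first LeNet-5 convolution layer), the function name conv_layer, and the exact pragma placement are illustrative assumptions, not the paper's actual kernel.

// Illustrative Vitis HLS kernel: a single convolution layer whose
// output-channel loop is pipelined and whose 5x5 kernel loops are
// fully unrolled. Sizes follow LeNet-5's first convolution layer
// and are assumed here for illustration only.
#define IN_H   32
#define IN_W   32
#define OUT_CH 6
#define K      5

void conv_layer(const float in[IN_H][IN_W],
                const float weight[OUT_CH][K][K],
                const float bias[OUT_CH],
                float out[OUT_CH][IN_H - K + 1][IN_W - K + 1]) {
ROW:
    for (int r = 0; r < IN_H - K + 1; r++) {
    COL:
        for (int c = 0; c < IN_W - K + 1; c++) {
        OCH:
            for (int oc = 0; oc < OUT_CH; oc++) {
#pragma HLS PIPELINE II=1
                float acc = bias[oc];
            KY:
                for (int ky = 0; ky < K; ky++) {
#pragma HLS UNROLL
                KX:
                    for (int kx = 0; kx < K; kx++) {
#pragma HLS UNROLL
                        // Multiply-accumulate over one 5x5 window.
                        acc += in[r + ky][c + kx] * weight[oc][ky][kx];
                    }
                }
                out[oc][r][c] = acc;
            }
        }
    }
}

Placing #pragma HLS PIPELINE II=1 on the output-channel loop asks the tool to start a new output element every clock cycle, while #pragma HLS UNROLL on the kernel loops replicates the multiply-accumulate logic so the 25 products of a 5x5 window can be computed in parallel; these two directives correspond to the pipeline and loop-unrolling techniques listed in the keywords.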

convolutional neural network; edge deployment; low power consumption; FPGA; pipeline; loop unrolling; HLS

周诗云、钱松荣、卫少东、郑鑫


School of Mechanical Engineering, Guizhou University, Guiyang 550025, China

State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China


Guizhou International Joint Research Center for Optoelectronic Information and Intelligent Applications project

Qiankehe Platform Talent No. 20195802

2024

Automation & Instrumentation
Tianjin Industrial Automation Instrument Research Institute; Tianjin Automation Society


CSTPCD
Impact factor: 0.548
ISSN:1001-9944
Year, volume (issue): 2024, 39(7)