Design and implementation of configurable CNN accelerator based on SoC
A configurable convolutional neural network (CNN) accelerator based on system-on-chip (SoC) is designed to address the issue that current CNN accelerator designs can only be deployed on a single field-programmable gate array (FPGA) and cannot be reused across platforms. The accelerator has two characteristics. First, in the circuit design, the data bit width, the size of the intermediate buffer space, and the parallelism of the multiply-accumulate (MAC) array are configurable parameters, so the accelerator can adapt to different FPGA hardware by adjusting its resource utilization. Second, a dynamic data reuse strategy is proposed: during data transmission, the reuse method is selected dynamically according to the difference in total parameter volume between the candidate reuse methods, which reduces the waiting time for data transfers and improves the utilization of the MAC array. The scheme is tested on the ZCU104 board, and the experimental results show that with a data bit width of 8, a multiplier array parallelism of 1 024, and the core operation module running at 180 MHz, the peak throughput of the convolution operation array is 180 GOPs, with a power consumption of 3.75 W and an energy efficiency ratio of 47.97 GOPs·W⁻¹. For the VGG16 network, the average MAC utilization of its convolutional layers reaches 84.37%.
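To illustrate the dynamic reuse selection described above, the following is a minimal host-side sketch, not the paper's implementation: it assumes a simplified per-layer traffic model in which one reuse mode keeps weights on chip and re-streams feature maps, while the other keeps a feature-map tile on chip and re-streams weights; the struct fields, function names, and tiling factor are hypothetical.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical per-layer description; field names are illustrative only. */
typedef struct {
    uint32_t in_c, out_c;   /* input / output channel counts     */
    uint32_t in_h, in_w;    /* input feature-map height / width  */
    uint32_t k;             /* square convolution kernel size    */
} conv_layer_t;

typedef enum { REUSE_WEIGHTS, REUSE_FEATURES } reuse_mode_t;

/* Estimated words moved when weights stay on chip and the input
 * feature maps are streamed once per tile (weight-reuse mode). */
static uint64_t traffic_weight_reuse(const conv_layer_t *l, uint32_t tiles)
{
    uint64_t weights  = (uint64_t)l->in_c * l->out_c * l->k * l->k;
    uint64_t features = (uint64_t)l->in_c * l->in_h * l->in_w * tiles;
    return weights + features;
}

/* Estimated words moved when a feature-map tile stays on chip and the
 * weights are streamed once per tile (feature-reuse mode). */
static uint64_t traffic_feature_reuse(const conv_layer_t *l, uint32_t tiles)
{
    uint64_t weights  = (uint64_t)l->in_c * l->out_c * l->k * l->k * tiles;
    uint64_t features = (uint64_t)l->in_c * l->in_h * l->in_w;
    return weights + features;
}

/* Pick the reuse mode with the smaller estimated transfer volume. */
static reuse_mode_t select_reuse(const conv_layer_t *l, uint32_t tiles)
{
    return traffic_weight_reuse(l, tiles) <= traffic_feature_reuse(l, tiles)
               ? REUSE_WEIGHTS
               : REUSE_FEATURES;
}

int main(void)
{
    /* Example: a VGG16-style 3x3 layer split into 4 tiles (illustrative). */
    conv_layer_t layer = { 64, 128, 112, 112, 3 };
    reuse_mode_t mode  = select_reuse(&layer, 4);
    printf("selected reuse mode: %s\n",
           mode == REUSE_WEIGHTS ? "weight reuse" : "feature reuse");
    return 0;
}
```

Under this kind of model, early layers with large feature maps tend to favor keeping features on chip, while later layers with many channels and small maps tend to favor keeping weights on chip, which is why a per-layer dynamic choice can outperform a fixed reuse scheme.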