Design of lightweight network hardware optimization for edge computing
With the rapid development of the mobile internet and the Internet of Things, vast numbers of intelligent terminal devices generate massive amounts of data that require real-time intelligent analysis and processing at the edge of the network. Researching hardware optimization schemes for lightweight neural networks to enable edge intelligence has therefore become a current research hotspot. This article focuses on design and optimization strategies for lightweight network hardware, covering model compression and quantization, replacing floating-point computation with fixed-point computation, data-flow optimization, storage optimization, and parallel computing. In the FPGA implementation, pipeline parallelism and on-chip BRAM are used to improve the execution efficiency of MobileNetV2. The results show that, compared with the original model, the optimized model significantly reduces resource-utilization indicators such as parameter count and memory usage, while performance indicators such as CPU utilization and inference speed improve markedly. The study validates the proposed optimization methods and provides a reference for deploying deep neural networks on edge devices.
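To make the fixed-point substitution concrete, the following is a minimal sketch of symmetric 8-bit quantization of a weight tensor in NumPy. The per-tensor scale, bit width, helper names, and the random tensor standing in for a MobileNetV2 layer are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, num_bits: int = 8):
    """Map float weights to signed fixed-point integers (illustrative sketch)."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    scale = np.max(np.abs(weights)) / qmax  # single per-tensor scale (assumption)
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values to check quantization error."""
    return q.astype(np.float32) * scale

# Hypothetical weight tensor standing in for one MobileNetV2 convolution layer
w = np.random.randn(32, 3, 3, 3).astype(np.float32) * 0.1
q, s = quantize_symmetric(w)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"int8 storage: {q.nbytes} B vs float32: {w.nbytes} B, max error: {err:.5f}")
```

This illustrates the resource trade-off the abstract describes: storing weights as int8 cuts memory by 4x relative to float32, and integer arithmetic maps directly onto FPGA DSP and BRAM resources, at the cost of a small, bounded reconstruction error.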