Spiking neural networks (SNNs) have received broad attention for their relatively lower computational energy consumption compared to artificial neural networks (ANNs). However, most conventional SNNs adopt a synchronous computation paradigm in which multiple timesteps are used to simulate the dynamic process of information integration, leading to problems such as extended inference latency and increased computational energy consumption, which severely degrade efficiency when deployed on edge intelligent devices. In this paper, we propose an adaptive-timestep spiking neural network (ATSNN) algorithm, which automatically selects a proper inference timestep according to the features of each input sample and regulates the importance of different timesteps through a novel time-dependent loss function. In addition, a low-energy-consumption SNN accelerator is designed based on the above characteristics of ATSNN to support the deployment of the ATSNN algorithm on mature network architectures such as VGG and ResNet. Software and hardware experiments on standard datasets including CIFAR10, CIFAR100, and CIFAR10-DVS show that, compared with conventional SNN algorithms using static timesteps, ATSNN achieves comparable accuracy while reducing inference latency by about 36.7%~58.7% and computational complexity by about 33.0%~57.0%. Furthermore, results on the hardware simulator indicate that the computational energy consumption of ATSNN is only about 4.43%~7.88% of that of a GPU RTX 3090Ti, demonstrating the great advantages of brain-inspired neuromorphic hardware and software.
Key words
Spiking neural network (SNN) / low power consumption inference / efficient training / low latency
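The abstract does not specify how the per-sample inference timestep is chosen or how the time-dependent loss weights each timestep. The sketch below is only an assumed illustration, in PyTorch style, of one common way such mechanisms are realized: a confidence-based early-exit inference loop and a per-timestep weighted loss. All names (`snn_step`, `confidence_threshold`, `weights`) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def adaptive_timestep_inference(snn_step, x, max_timesteps=6,
                                confidence_threshold=0.9):
    """Run the SNN timestep by timestep and stop early once the running
    prediction is confident enough (assumed mechanism, for illustration)."""
    accumulated_logits = None
    for t in range(max_timesteps):
        logits_t = snn_step(x, t)                      # one timestep of the SNN
        accumulated_logits = logits_t if accumulated_logits is None \
            else accumulated_logits + logits_t
        probs = F.softmax(accumulated_logits / (t + 1), dim=-1)
        # Exit when every sample in the batch exceeds the confidence threshold.
        if probs.max(dim=-1).values.min() >= confidence_threshold:
            break
    return accumulated_logits / (t + 1), t + 1          # averaged logits, timesteps used

def time_dependent_loss(per_step_logits, target, weights):
    """Weight the cross-entropy of each timestep's prediction separately;
    the actual weighting used by ATSNN is not given in the abstract."""
    return sum(w * F.cross_entropy(logits, target)
               for w, logits in zip(weights, per_step_logits))
```

Under this reading, easy samples exit after one or two timesteps while hard samples use more, which is consistent with the latency and complexity reductions reported in the abstract, though the paper's exact criterion may differ.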