Multi-vehicle object recognition based on YOLOv7-R
To meet the high-accuracy requirements of intelligent traffic control for multi-vehicle object recognition, an improved algorithm, YOLOv7-R (you only look once version 7 for the road), was proposed based on the YOLO algorithm. The global attention mechanism (GAM) was introduced into the backbone network to enhance feature extraction. The omni-dimensional dynamic and efficient aggregation network (ODEANet) was used to reconstruct the backbone network, improving the algorithm's robustness and accuracy. The computationally intensive extended efficient layer aggregation network (E-ELAN) was replaced with the contextual transformer network (CoTNet), which guides dynamic attention-matrix learning and reduces floating-point computation. Additionally, the K-means++ clustering algorithm was used to optimize the prior (anchor) boxes and improve their matching degree. Through these systematic improvements, both the efficiency and the accuracy of multi-object vehicle recognition were improved. Experiments were conducted on three traffic flows, namely free flow, synchronized flow, and blocked flow. The results show that YOLOv7-R achieves average recognition accuracies of 97.13%, 94.85%, and 94.60%, respectively, which are 3.65%, 3.20%, and 1.40% higher than those of the baseline algorithm, while the detection rates are 74.63, 79.37, and 75.76 frames/s, respectively. Compared with the baseline algorithm, the giga floating-point operations per second (GFLOPS) of YOLOv7-R is reduced by 3.10%, and the number of parameters is reduced by 13.37%.
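The anchor-optimization step can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes the common YOLO-style formulation in which ground-truth box widths and heights are clustered with K-means++ seeding and 1 − IoU (boxes aligned at a common corner) as the distance metric; the function names are illustrative.

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors,
    treating all boxes as sharing the same top-left corner."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) boxes into k prior boxes using K-means++ seeding
    and 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    n = len(boxes)
    # K-means++ seeding: first center uniform, later centers sampled
    # with probability proportional to squared distance to nearest center.
    centers = [boxes[rng.integers(n)]]
    for _ in range(1, k):
        d2 = np.array([(1.0 - iou_wh(b, np.array(centers))).min() ** 2
                       for b in boxes])
        centers.append(boxes[rng.choice(n, p=d2 / d2.sum())])
    anchors = np.array(centers, dtype=float)
    # Standard Lloyd iterations with IoU-based assignment.
    for _ in range(iters):
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else anchors[j] for j in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors[np.argsort(anchors.prod(axis=1))]  # sort by box area
```

In practice the resulting anchors are distributed across the detection heads by scale, so that each head matches objects of a similar size.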