Design of an end-to-end autonomous driving method based on spatiotemporal feature fusion
Existing end-to-end autonomous driving models suffer from high complexity, numerous parameters, and heavy computation, and network models consisting solely of convolutional neural networks cannot handle temporal features. To address these problems, a novel end-to-end autonomous driving model called M-GRU was proposed. The model consisted of an improved MobileNetV2 network and a gated recurrent unit (GRU) network. The improved MobileNetV2 network added an attention module to the original MobileNetV2, designed to enhance the network's attention to important features by weighting feature information closely related to driving decisions. The two network modules of the M-GRU model extracted the spatial and temporal features of images, respectively, and predicted autonomous driving behaviors through behavior cloning. The M-GRU model was trained and tested in a simulator and compared with NVIDIA's PilotNet model. The results showed that the loss function value of M-GRU was lower than that of PilotNet, and the car controlled by the M-GRU model completed straight driving, turning, and acceleration/deceleration tasks on the road, as well as driving tasks in both simple and challenging modes, demonstrating better performance.
Key words: autonomous driving; end-to-end; MobileNetV2; gated recurrent unit; attention mechanism
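As a minimal sketch of how the spatiotemporal architecture described above could be assembled, the following PyTorch code pairs a MobileNetV2 feature extractor with an attention block and a GRU head. The abstract does not specify the attention module's exact form, so a squeeze-and-excitation-style channel attention is assumed here; the layer sizes, clip length, and single-output control head are likewise illustrative assumptions rather than the paper's configuration.

```python
# Illustrative sketch of an M-GRU-style model (assumptions noted in comments).
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class ChannelAttention(nn.Module):
    """Weights feature channels so features tied to driving decisions dominate.

    Squeeze-and-excitation-style design; an assumption, since the paper does
    not detail its attention module.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x).view(x.size(0), -1, 1, 1)  # per-channel weights in (0, 1)
        return x * w


class MGRU(nn.Module):
    """Improved MobileNetV2 (spatial features) followed by a GRU (temporal features)."""
    def __init__(self, hidden_size: int = 128, num_actions: int = 1):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features  # spatial feature extractor
        self.attention = ChannelAttention(1280)               # MobileNetV2 output channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gru = nn.GRU(1280, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_actions)       # e.g. a steering command

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) -- a short clip of consecutive camera frames
        b, t = frames.shape[:2]
        x = frames.flatten(0, 1)               # (b*t, 3, H, W): per-frame features
        x = self.attention(self.backbone(x))   # attention-weighted spatial features
        x = self.pool(x).flatten(1).view(b, t, -1)
        _, h = self.gru(x)                     # aggregate over time
        return self.head(h[-1])                # predicted driving behavior


if __name__ == "__main__":
    model = MGRU()
    clip = torch.randn(2, 5, 3, 224, 224)  # 2 clips of 5 frames each
    print(model(clip).shape)                # torch.Size([2, 1])
```

Under behavior cloning, such a model would be trained by regressing its output against recorded human control commands (e.g. with an MSE loss), matching the simulator-based training and comparison against PilotNet described in the abstract.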