A New Paradigm for Intelligent Traffic Perception: A Traffic Sign Detection Architecture for the Metaverse
Traffic sign detection plays an important role in the safe and stable operation of intelligent transportation systems and intelligent driving. Unbalanced data distributions and monotonous scenes lead to poor model performance, yet building a complete real-world traffic scene dataset incurs high time and labor costs. To address this, a new metaverse-oriented traffic sign detection paradigm is proposed to alleviate the dependence of existing methods on real data. Firstly, by establishing a scene mapping and a model mapping between the metaverse and the physical world, the detection algorithm can operate efficiently across the virtual and real worlds. As a virtualized digital world, the metaverse supports custom scene construction based on the physical world and provides massive, diverse virtual scene data for the model. At the same time, knowledge distillation and the mean teacher model are combined in this paper to establish a model mapping that handles the data differences between the metaverse and the physical world. Secondly, to further improve the adaptability of the model trained in the metaverse to real driving environments, a heuristic attention mechanism is designed that improves the generalization ability of the detection model by locating and learning discriminative features. The proposed architecture is experimentally verified on the CURE-TSD, KITTI, and Virtual KITTI (VKITTI) datasets. Experimental results show that the proposed metaverse-oriented traffic sign detector achieves excellent detection results in the physical world without relying on a large number of real scenes, reaching a detection accuracy of 89.7%, which is higher than that of other detection methods from recent years.
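The abstract does not give implementation details of the model mapping; as an illustration only, the following is a minimal PyTorch sketch of how a mean-teacher scheme with a distillation-style consistency loss could combine labeled virtual (metaverse) data with unlabeled real images. The toy classifier head, the function names (`update_teacher`, `training_step`), and all hyperparameters are hypothetical stand-ins, not the paper's actual detector or code.

```python
# Minimal sketch (assumed, not the paper's implementation): a student trains on
# labeled virtual data, an EMA teacher tracks the student, and a consistency
# loss on unlabeled real images bridges the virtual-to-real domain gap.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy backbone standing in for a real traffic sign detector.
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated only via EMA, never by backprop

optimizer = torch.optim.SGD(student.parameters(), lr=1e-2)

def update_teacher(alpha=0.999):
    """Exponential moving average: teacher weights slowly track the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def training_step(virtual_x, virtual_y, real_x, lambda_cons=1.0):
    """Supervised loss on virtual data + consistency loss on real data."""
    optimizer.zero_grad()
    sup_loss = F.cross_entropy(student(virtual_x), virtual_y)   # labeled metaverse data
    with torch.no_grad():
        teacher_logits = teacher(real_x)                         # teacher "soft labels"
    cons_loss = F.mse_loss(student(real_x), teacher_logits)      # distillation consistency
    loss = sup_loss + lambda_cons * cons_loss
    loss.backward()
    optimizer.step()
    update_teacher()
    return loss.item()

# Random tensors stand in for virtual and real image batches:
virtual_x, virtual_y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
real_x = torch.randn(4, 3, 32, 32)
print(training_step(virtual_x, virtual_y, real_x))
```

In this sketch the teacher never receives gradients; it only averages past student weights, which is the usual mean-teacher design for producing stable targets on the unlabeled (here, real-world) domain.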