Visual SLAM Based on Deep Learning in Dynamic Environment
Traditional visual simultaneous localization and mapping (SLAM) systems are designed under the assumption of a static environment. In a dynamic environment, moving targets cause feature-matching failures, which degrades pose estimation. A visual SLAM system combined with a convolutional neural network is proposed. A dynamic-target detection thread, built from a convolutional neural network combined with an attention mechanism, is added to the front end of the RGB-D mode of the ORB-SLAM2 system, so that dynamic target regions are eliminated during image feature-point extraction. The remaining static feature points are then used for accurate estimation of the camera pose. Experiments on the TUM dynamic dataset show that, over multiple runs, the improved algorithm raises positioning accuracy by more than 90% compared with the original algorithm while still meeting real-time requirements.
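The core front-end change described above is simple in principle: feature points falling inside regions flagged as dynamic by the detection thread are discarded before pose estimation. A minimal sketch of that filtering step is shown below; the box coordinates are hypothetical stand-ins for the output of the paper's CNN-with-attention detector, which is not reproduced here.

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_boxes):
    """Keep only keypoints lying outside every detected dynamic-object box.

    keypoints:     (N, 2) array of (x, y) pixel coordinates.
    dynamic_boxes: list of (x1, y1, x2, y2) boxes; in the paper these come
                   from the CNN detection thread, here they are hand-made.
    """
    keep = np.ones(len(keypoints), dtype=bool)
    for x1, y1, x2, y2 in dynamic_boxes:
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        keep &= ~inside          # drop points inside any dynamic region
    return keypoints[keep]

# Toy example: one hypothetical "moving person" box in the image centre.
kps = np.array([[10.0, 10.0], [320.0, 240.0], [600.0, 400.0]])
boxes = [(200, 150, 440, 330)]
static_kps = filter_dynamic_keypoints(kps, boxes)
print(len(static_kps))  # → 2 (the centre point is discarded)
```

Only the surviving static points would then be passed to ORB-SLAM2's matching and pose-estimation stages.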
simultaneous localization and mapping; deep learning; pose estimation; dynamic scene; target detection