To address the susceptibility of traditional visual simultaneous localization and mapping (SLAM) to disturbances from moving objects in dynamic scenes, a visual SLAM algorithm based on an object detection network is proposed. The algorithm introduces a module for detecting and rejecting dynamic feature points into the tracking thread of ORB-SLAM2, so that only static feature points are used for pose estimation. First, YOLOv7 is chosen as the backbone object detection network and is combined with the lightweight GhostNet convolution modules and a convolution block with an SE attention mechanism (Conv_SE) for efficient detection of the environment. Second, the detected objects are classified, feature points associated with dynamic objects are rejected, and geometric constraints are employed to further identify and remove potentially moving objects. Finally, only the static feature points are used for feature matching and pose estimation. Validation on the TUM dataset shows that, compared with ORB-SLAM2, the proposed algorithm reduces the root mean square error (RMSE) of the absolute trajectory error (ATE) by an average of 96.5% on the dynamic walking sequences and also improves on the other dynamic sequences. The experimental results demonstrate that the algorithm significantly improves the localization accuracy and robustness of the system in dynamic scenes.
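As a rough illustration of the dynamic-feature rejection described above, the sketch below shows how feature points lying inside detected dynamic-object bounding boxes might be discarded, and how an epipolar constraint can then flag remaining matches that still move inconsistently with the camera. This is a minimal sketch under stated assumptions, not the paper's implementation: the dynamic class list, the distance threshold, and all function names are placeholders chosen for illustration.

```python
import numpy as np

# Hypothetical set of classes treated as dynamic; the paper's exact list is not given here.
DYNAMIC_CLASSES = {"person", "car", "bicycle"}

def in_box(pt, box):
    """Return True if pixel pt = (u, v) lies inside box = (x1, y1, x2, y2)."""
    u, v = pt
    x1, y1, x2, y2 = box
    return x1 <= u <= x2 and y1 <= v <= y2

def reject_dynamic_points(keypoints, detections):
    """Discard keypoints that fall inside any bounding box of a dynamic class.

    keypoints  : list of (u, v) pixel coordinates in the current frame
    detections : list of (class_name, (x1, y1, x2, y2)) from the detector
    """
    dyn_boxes = [box for cls, box in detections if cls in DYNAMIC_CLASSES]
    return [p for p in keypoints if not any(in_box(p, b) for b in dyn_boxes)]

def epipolar_outliers(pts_prev, pts_curr, F, thresh=1.0):
    """Flag matches whose point-to-epipolar-line distance exceeds thresh pixels.

    pts_prev, pts_curr : (N, 2) arrays of matched pixel coordinates
    F                  : 3x3 fundamental matrix estimated from presumed-static matches
    """
    prev_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])  # homogeneous coords
    curr_h = np.hstack([pts_curr, np.ones((len(pts_curr), 1))])
    lines = prev_h @ F.T                                  # epipolar lines l' = F x
    num = np.abs(np.sum(curr_h * lines, axis=1))          # |x'^T F x|
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)    # normalize by line direction
    return (num / den) > thresh                           # True = likely moving point
```

In this sketch, box-based rejection removes points on objects the detector labels as dynamic, and the epipolar check covers potentially moving points the detector missed; both steps run before feature matching and pose estimation so that only static points contribute to the estimated trajectory.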