Visual SLAM Algorithm Combining an Image Brightness Enhancement Module and IMU Information

To address the large trajectory error and low efficiency of visual simultaneous localization and mapping (SLAM) algorithms in low-light environments, a visual SLAM algorithm based on ORB-SLAM2 (oriented FAST and rotated BRIEF SLAM2) is proposed that fuses an image brightness enhancement module with inertial measurement unit (IMU) information. A Gamma correction factor that adapts to image brightness is designed: low-light images selected by a brightness threshold are adaptively brightened before feature-point extraction, which increases the number of keyframes the algorithm generates in low-light environments. The extracted feature points are then tracked with the Lucas-Kanade (LK) optical flow method to estimate an initial pose, and the pose is further optimized by combining visual and IMU information, improving the algorithm's efficiency and robustness. Experiments on public datasets and on a Bingda ROS (robot operating system) robot show that, compared with ORB-SLAM2, the improved algorithm reduces the mean absolute trajectory error by 35%, the mean relative pose error by 25%, and the mean per-frame tracking time by 24%, demonstrating higher accuracy and efficiency and good practical value for low-light applications.
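The abstract describes a Gamma correction factor that adapts to image brightness, applied only to frames that a brightness threshold flags as low-light. The paper does not give the exact formula, so the sketch below uses one common adaptive choice as an assumption: the exponent is derived from the image's mean intensity so that the mean is mapped toward a mid-gray target. The threshold and target values are illustrative, not the paper's.

```python
import numpy as np

def adaptive_gamma(img, low_light_threshold=80.0, target=127.0):
    """Adaptive Gamma correction for low-light frames (sketch).

    Hypothetical formulation, not the paper's exact factor: if the mean
    brightness of the 8-bit grayscale image falls below
    `low_light_threshold`, choose gamma so that a pixel at the mean is
    mapped to `target`; otherwise return the image unchanged.
    """
    mean = float(img.mean())
    if mean >= low_light_threshold:
        return img  # bright enough: skip enhancement
    # (mean/255)^gamma = target/255  =>  gamma < 1 brightens dark images
    gamma = np.log(target / 255.0) / np.log(mean / 255.0)
    normalized = img.astype(np.float64) / 255.0
    corrected = np.power(normalized, gamma) * 255.0
    return corrected.astype(np.uint8)
```

Feature extraction (ORB in the paper) would then run on the corrected frame, so dark scenes yield more detectable corners and thus more keyframes.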
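The LK optical-flow tracking mentioned above rests on a least-squares step: within a small window, the brightness-constancy constraint `Ix*vx + Iy*vy + It = 0` is solved for one translation vector. The minimal single-level sketch below shows that step under stated assumptions (no pyramid, no iterative refinement, a fixed square window); real trackers such as the one in ORB-SLAM2-style pipelines use pyramidal, iterative variants.

```python
import numpy as np

def lk_translation(I1, I2, center, half=2):
    """One Lucas-Kanade least-squares step over a (2*half+1)^2 window.

    Sketch only: single pyramid level, single iteration. Returns the
    estimated (vx, vy) flow at `center` = (row, col).
    """
    # Spatial gradients of the first frame and the temporal difference
    Iy, Ix = np.gradient(I1.astype(np.float64))  # axis 0 = rows, axis 1 = cols
    It = I2.astype(np.float64) - I1.astype(np.float64)
    r, c = center
    win = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    # Normal equations of  min || [ix iy] v + it ||^2
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    vx, vy = np.linalg.solve(A, b)
    return vx, vy
```

Tracking each ORB feature this way gives correspondences for the initial pose estimate, which the visual-IMU optimization then refines.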