Researchers from Beijing Institute of Technology Provide Details of New Studies and Findings in the Area of Robotics (VID-SLAM: Robust Pose Estimation with RGBD-Inertial Input for Indoor Robotic Localization)
Full-text links: NETL, NSTL, MDPI
Researchers detail new data in robotics. According to news reporting originating from Beijing, People's Republic of China, by NewsRx correspondents, the research stated, "This study proposes a tightly coupled multi-sensor Simultaneous Localization and Mapping (SLAM) framework that integrates RGB-D and inertial measurements to achieve highly accurate 6-degree-of-freedom (6DOF) metric localization in a variety of environments."

Funders for this research include the National Natural Science Foundation of China; the Shenyang Science and Technology Project; and the Educational Department of Liaoning Provincial Basic Research Project.

Our news editors obtained a quote from the research from Beijing Institute of Technology: "Through the consideration of geometric consistency, inertial measurement unit constraints, and visual re-projection errors, we present visual-inertial-depth odometry (called VIDO), an efficient state estimation back-end, to minimise the cascading losses of all factors. Existing visual-inertial odometers rely on visual feature-based constraints to eliminate the translational displacement and angular drift produced by Inertial Measurement Unit (IMU) noise. To mitigate these constraints, we introduce the iterative closest point error of adjacent frames and update the state vectors of observed frames through the minimisation of the estimation errors of all sensors. Moreover, the closed-loop module allows for further optimization of the global attitude map to correct the long-term drift. For experiments, we collect an RGBD-inertial data set for a comprehensive evaluation of VID-SLAM. The data set contains RGB-D image pairs, IMU measurements, and two types of ground truth data."
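The quoted description of VIDO amounts to a single joint non-linear least-squares problem in which visual re-projection errors, IMU constraints, and iterative-closest-point (ICP) depth errors are minimised together. The sketch below is not the authors' implementation; it is a minimal illustration, with synthetic data and made-up weights, of how such a tightly coupled residual can be stacked and solved for one relative pose using `scipy.optimize.least_squares`. All names and parameters here are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the VID-SLAM code) of a tightly coupled
# RGBD-inertial residual: visual re-projection + IMU relative-pose prior
# + ICP point-to-point error, solved jointly for one 6DOF relative pose.
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def residuals(x, landmarks, pixels, K_cam, src_pts, dst_pts, imu_rvec, imu_t, w):
    """Stack visual, IMU, and ICP residuals for the pose x = [rvec(3), t(3)]."""
    R, t = rodrigues(x[:3]), x[3:]
    # (a) visual re-projection: project 3D landmarks and compare with pixels
    cam = landmarks @ R.T + t
    proj = cam @ K_cam.T
    r_vis = (proj[:, :2] / proj[:, 2:3] - pixels).ravel()
    # (b) IMU constraint: pull the pose toward the pre-integrated IMU estimate
    r_imu = np.concatenate([x[:3] - imu_rvec, t - imu_t])
    # (c) ICP: point-to-point error between associated depth points
    r_icp = (src_pts @ R.T + t - dst_pts).ravel()
    return np.concatenate([w[0] * r_vis, w[1] * r_imu, w[2] * r_icp])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic ground-truth relative pose between two adjacent frames
    rvec_gt, t_gt = np.array([0.02, -0.01, 0.03]), np.array([0.10, 0.00, 0.05])
    R_gt = rodrigues(rvec_gt)
    K_cam = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
    landmarks = rng.uniform([-1, -1, 2], [1, 1, 5], size=(40, 3))
    cam = landmarks @ R_gt.T + t_gt
    pixels = (cam @ K_cam.T)[:, :2] / cam[:, 2:3] + rng.normal(0, 0.5, (40, 2))
    src_pts = rng.uniform([-1, -1, 1], [1, 1, 4], size=(60, 3))
    dst_pts = src_pts @ R_gt.T + t_gt + rng.normal(0, 0.005, (60, 3))
    imu_rvec = rvec_gt + rng.normal(0, 0.01, 3)   # noisy IMU pre-integration
    imu_t = t_gt + rng.normal(0, 0.02, 3)

    x0 = np.zeros(6)  # identity initial guess
    sol = least_squares(residuals, x0,
                        args=(landmarks, pixels, K_cam, src_pts, dst_pts,
                              imu_rvec, imu_t, (1.0, 100.0, 200.0)))
    print("estimated rvec:", sol.x[:3], "t:", sol.x[3:])
```

In this toy setup the ICP and visual terms correct the drift of the IMU-only estimate, mirroring the paper's stated motivation for adding the iterative closest point error of adjacent frames to the visual-inertial constraints; the relative weights are illustrative placeholders, not values from the paper.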
Keywords: Beijing Institute of Technology, Beijing, People's Republic of China, Asia, Emerging Technologies, Machine Learning, Robotics, Robots