Monocular depth estimation based on image and sparse laser point fusion
In recent years, with the rapid development of deep learning, a large number of monocular depth estimation algorithms have emerged. However, the lack of geometric constraints such as disparity limits further improvement of the depth prediction accuracy of these algorithms and fails to meet the needs of practical applications, so this paper proposes a depth estimation network that fuses images with sparse laser points. First, depth prediction accuracy is improved by feeding in the high-precision ranging results of a small number of laser points in real time. Second, to address the uneven distribution of LiDAR points in the self-collected data, a relative pose estimation network is added on top of the supervised network and trained jointly with the depth estimation network, and two loss functions, luminance consistency and depth reprojection, are introduced. Finally, experiments on the self-collected data show that with 160 laser points the absolute relative error of depth prediction is reduced from 10.1% to 7.6%, and with 1280 laser points the absolute relative error stabilizes at 4.1%.
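The two added loss terms can be illustrated with a minimal sketch. This is an assumed formulation, not the paper's exact implementation: `photometric_loss` and `depth_reprojection_loss` are hypothetical names, and the common form of each loss (a mean absolute difference) is used, with images and depth maps reduced to flat lists of values for brevity.

```python
# Hedged sketch of the two loss terms named in the abstract.
# Assumption: both are mean-absolute-error terms, as is common in
# self-supervised depth estimation; the paper may use a different form.

def photometric_loss(target, warped):
    """Luminance-consistency term: mean absolute difference between the
    target image and the source image warped into the target view using
    the predicted depth and the estimated relative pose."""
    n = len(target)
    return sum(abs(t - w) for t, w in zip(target, warped)) / n

def depth_reprojection_loss(pred_depth, reproj_depth):
    """Depth-reprojection term: mean absolute difference between the
    predicted depth and the depth reprojected from the other view with
    the estimated relative pose."""
    n = len(pred_depth)
    return sum(abs(d - r) for d, r in zip(pred_depth, reproj_depth)) / n

# Toy 1-D examples with made-up pixel intensities and depths (metres):
L_photo = photometric_loss([0.2, 0.5, 0.9], [0.25, 0.45, 0.9])
L_depth = depth_reprojection_loss([2.0, 4.0], [2.1, 3.8])
total = L_photo + L_depth  # terms would be weighted in a real training loss
```

In practice the warped image and reprojected depth would be produced by a differentiable warp driven by the relative pose network's output, so both terms supervise pose and depth jointly.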