
Experimental design of robot grasping based on depth completion of transparent objects

This paper presents a comprehensive robot-grasping experiment targeting the problem of grasping transparent objects. A depth completion algorithm for transparent objects based on geometric constraints is proposed: deep-learning-based depth completion is studied, semantic segmentation maps are used to preprocess the input data, and surface normals and occlusion edges are jointly used as geometric constraints to predict the missing depth values. The grasp detection network GR-ConvNet, which fuses RGBD (red-green-blue plus depth map) information, is selected to detect grasps on transparent objects. The experimental data show that the TransLab algorithm exhibits good anti-interference capability, clearly highlighting the shapes and contours of all objects, and that models trained on the completed depth maps achieve higher accuracy. This experimental design helps students understand basic theories and methods such as depth completion and semantic segmentation, and cultivates their ability to connect theory with practice as well as their interest in scientific research.
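The pixel-level RGBD fusion described above, in which the depth map becomes a fourth input channel alongside RGB, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not GR-ConvNet's actual preprocessing code; the function name, array shapes, and min-max normalization are assumptions made for the example.

```python
import numpy as np

def make_rgbd_input(rgb, depth):
    """Stack a (completed) depth map onto an RGB image as a 4th channel.

    rgb:   (H, W, 3) float array, values in [0, 1]
    depth: (H, W)    float array, ideally the output of depth completion

    Returns an (H, W, 4) array of the kind a grasp detection network
    such as GR-ConvNet consumes to predict pixel-level grasp quality,
    grasp width, and grasp angle maps.
    """
    if rgb.shape[:2] != depth.shape:
        raise ValueError("rgb and depth must share the same H x W size")
    # Normalize depth to [0, 1] so all four channels share one scale
    # (an assumed convention for this sketch).
    d = depth.astype(np.float64)
    span = d.max() - d.min()
    if span > 0:
        d = (d - d.min()) / span
    return np.concatenate([rgb, d[..., None]], axis=-1)
```

Completing the depth map before this stacking step matters because a fourth channel full of holes or background depth would feed the grasp network misleading geometry exactly where the transparent object sits.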
[Objective] Transparent objects are widely used in daily life because of their aesthetic appeal, affordability, and practicality. Research on recognizing and grasping transparent objects can broaden the application scope of mobile, home-service, and industrial robots. Transparent materials lack texture and have high light transmittance; the appearance of a transparent object depends on the background pattern behind it, and standard commercial red-green-blue-depth (RGBD) cameras cannot accurately capture its depth. Detecting and grasping such objects with machine-vision technology therefore presents significant challenges.
[Methods] Current grasp detection algorithms rely heavily on the quality of depth maps. Owing to their unique visual properties, transparent objects make it difficult for standard 3D sensors to estimate depth accurately: existing depth cameras produce incorrect or missing depth in regions containing highlights or transparent surfaces. Solving the subsequent grasping problem requires accurate depth information for transparent objects. Two types of errors commonly occur when collecting depth for transparent objects: the background depth is captured instead of the object's, and voids appear in which depth values are missing entirely. We studied a deep-learning-based depth completion algorithm that preprocesses the input data using semantic segmentation maps and combines surface normals and occlusion edges as geometric constraints to predict the missing depth values. To improve completion quality, we optimized the preprocessing stage: the ClearGrasp model uses a DeepLabV3+ network for segmentation, and we replaced DeepLabV3+ with TransLab to reduce segmentation errors. Furthermore, we adopted the GR-ConvNet grasp detection model, which fuses RGBD images at the pixel level by treating the depth map as a fourth channel of the RGB image and feeding the result into a convolutional neural network that produces pixel-level predictions of grasp quality, grasp width, and grasp angle. The depth map is first enhanced by the depth completion model and the grasp boxes are manually annotated; the processed data are then passed to the grasp detection network. The grasping experiment demonstrates the essential role of depth completion.
[Results] The depth completion experiment shows that the TransLab+ClearGrasp model outperforms the other two algorithms on all indicators: its RMSE (root mean square error) and MAE (mean absolute error) are lower than those of the ClearGrasp model, and its δ(t) accuracy is higher than ClearGrasp's under all three thresholds. Optimizing the preprocessing algorithm therefore considerably improves the accuracy of the depth completion model. The grasping experiment indicates that models trained on completed depth maps achieve a relatively high grasp success rate; in particular, GR-ConvNet reaches a success rate of 73.33%, which is 12.5% higher than that of the model trained on the original depth maps.
[Conclusions] The experimental data demonstrate that the TransLab algorithm exhibits strong anti-interference capability and clearly highlights the shapes and contours of all objects, and that models trained on completed depth maps achieve higher accuracy. These findings highlight the importance of completing depth images of transparent objects in practical applications.
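The metrics quoted in the results, RMSE, MAE, and the threshold accuracy δ(t), can be computed directly from predicted and ground-truth depth maps. The sketch below is a hedged illustration: the helper name is invented here, and the δ(t) definition follows the common monocular-depth convention (fraction of valid pixels whose ratio max(pred/gt, gt/pred) falls below the threshold t), which the paper's thresholds are assumed to follow.

```python
import numpy as np

def depth_metrics(pred, gt, valid=None, thresholds=(1.05, 1.10, 1.25)):
    """Compute RMSE, MAE, and delta(t) over valid depth pixels.

    pred, gt: float arrays of predicted and ground-truth depth.
    valid:    optional boolean mask; defaults to pixels with gt > 0,
              since zero usually marks missing ground truth.

    delta(t) is the fraction of valid pixels for which
    max(pred/gt, gt/pred) < t; higher is better.
    """
    if valid is None:
        valid = gt > 0
    p, g = pred[valid], gt[valid]
    rmse = float(np.sqrt(np.mean((p - g) ** 2)))
    mae = float(np.mean(np.abs(p - g)))
    ratio = np.maximum(p / g, g / p)
    delta = {t: float(np.mean(ratio < t)) for t in thresholds}
    return rmse, mae, delta
```

Lower RMSE/MAE and higher δ(t) together indicate better completion, which is the comparison basis between TransLab+ClearGrasp and the baseline ClearGrasp above.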

transparent object; depth completion; geometric constraint; semantic segmentation; grasping experiment

ZHANG Qian, ZHU Meiqiang, WANG Hui, DAI Wei, LI Haigang


School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China


National Natural Science Foundation of China (62176259); Education and Teaching Reform Research Project of the Automation Teaching Steering Committee (202144); Jiangsu Provincial Natural Science Foundation for Excellent Young Scholars (BK20200086); Teaching Research Projects of China University of Mining and Technology (2020ZD05, 2022ZXKC08, 2022KCSZ03)

2024

Experimental Technology and Management (实验技术与管理)
Tsinghua University


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 1.651
ISSN:1002-4956
Year, volume (issue): 2024, 41(8)