
ASC-Net: A Fast Segmentation Network for Surgical Instruments and Organs in Laparoscopic Video

Laparoscopic surgery automation is an important component of intelligent surgery, and it is premised on real-time, precise segmentation of surgical instruments and organs in the laparoscopic field of view. Hindered by complex intraoperative factors such as blood contamination and smoke interference, real-time and precise segmentation of instruments and organs remains highly challenging, and existing image segmentation methods perform poorly. Therefore, a fast segmentation network based on attention perceptron and spatial channel (attention spatial channel net, ASC-Net) was proposed to achieve rapid and precise segmentation of surgical instruments and organs in laparoscopic images. Under the UNet architecture, attention perceptron and spatial-channel modules were designed and embedded between the encoding and decoding modules through skip connections, enabling the network to focus on differences in deep semantic information between similar targets in the image while learning multi-scale features of each target across multiple dimensions. In addition, a pre-training and fine-tuning strategy was adopted to reduce the network's computational cost. Experimental results show that, on the EndoVis2018 (EndoVis Robotic Scene Segmentation Challenge 2018) dataset, the mean Dice coefficient (mDice), mean intersection-over-union (mIoU), and mean inference time (mIT) of the proposed method were 90.64%, 86.40%, and 16.73 ms (about 60 frames/s), respectively; compared with existing state-of-the-art methods, mDice and mIoU improved by 26% and 39%, and mIT was reduced by 56%. On the AutoLaparo (automation in laparoscopic hysterectomy) dataset, the mDice, mIoU, and mIT were 93.72%, 89.43%, and 16.41 ms (about 61 frames/s), respectively, again outperforming the comparison methods. While maintaining segmentation speed, the proposed method effectively improves segmentation accuracy, achieving rapid and precise segmentation of surgical instruments and organs in laparoscopic images and supporting the development of laparoscopic surgery automation.
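The abstract above only describes the architecture in words. As a rough, hedged illustration (not the authors' implementation), the following PyTorch sketch assumes generic attention-gate and spatial-channel block designs attached to UNet skip connections, and shows how mDice and mIoU are commonly computed for multi-class segmentation masks; the names AttentionGate, SpatialChannelBlock, and mean_dice_iou are hypothetical and introduced here for illustration only.

```python
# Illustrative sketch only: the exact ASC-Net module designs are not reproduced
# here. It shows, under assumed generic designs, how attention and spatial-channel
# blocks can be attached to UNet skip connections, and how mDice / mIoU are
# typically computed for multi-class masks.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Assumed attention block: re-weights encoder features using decoder context."""
    def __init__(self, enc_ch, dec_ch, mid_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, mid_ch, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_ch, mid_ch, kernel_size=1)
        self.psi = nn.Conv2d(mid_ch, 1, kernel_size=1)

    def forward(self, enc_feat, dec_feat):
        # Align decoder features to the encoder feature resolution.
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[2:],
                                 mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat))))
        return enc_feat * attn  # emphasize regions that separate similar targets


class SpatialChannelBlock(nn.Module):
    """Assumed spatial-channel block: channel re-weighting followed by spatial re-weighting."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(ch, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)   # per-channel weights
        return x * self.spatial(x)  # per-pixel weights


def mean_dice_iou(pred_logits, target, num_classes, eps=1e-6):
    """Compute mDice / mIoU over classes for predicted logits (B,C,H,W) and labels (B,H,W)."""
    pred = pred_logits.argmax(dim=1)
    dices, ious = [], []
    for c in range(num_classes):
        p = (pred == c).float()
        t = (target == c).float()
        inter = (p * t).sum()
        dices.append(((2 * inter + eps) / (p.sum() + t.sum() + eps)).item())
        ious.append(((inter + eps) / (p.sum() + t.sum() - inter + eps)).item())
    return sum(dices) / num_classes, sum(ious) / num_classes
```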

Keywords: automated surgery; laparoscopic image; multi-object segmentation; attention perceptron; multi-scale features; pre-training fine-tuning

ZHANG Xinyu, ZHANG Jiayi, GAO Xin


School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, Anhui, China

Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, Jiangsu, China

Jinan Guoke Medical Engineering Technology Development Co., Ltd., Jinan 250101, Shandong, China


Funding: National Natural Science Foundation of China (82372052); National Key Research and Development Program of China (2022YFC2408400); Key Research and Development Program of Jiangsu Province (BE2021663, BE2023714); Key Research and Development Program of Shandong Province (2021SFGC0104); Natural Science Foundation of Shandong Province (ZR2022QF071)

2024

Journal of Graphics (图学学报)
China Graphics Society


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.73
ISSN:2095-302X
Year, Volume (Issue): 2024, 45(4)