Researchers from Florida Polytech University Detail Findings in Artificial Intelligence (Endoscopic Sleeve Gastroplasty: Stomach Location and Task Classification for Evaluation Using Artificial Intelligence)
Current study results on Artificial Intelligence have been published. According to news originating from Lakeland, Florida, by NewsRx correspondents, the research stated: “Purpose: We have previously developed grading metrics to objectively measure endoscopist performance in endoscopic sleeve gastroplasty (ESG). One of our primary goals is to automate the process of measuring performance.”

Funders for this research include the NIH National Institute of Biomedical Imaging & Bioengineering (NIBIB) and the National Institutes of Health (NIH), USA.

Our news journalists obtained a quote from the research from Florida Polytech University: “To achieve this goal, the repeated task being performed (grasping or suturing) and the location of the endoscopic suturing device in the stomach (Incisura, Anterior Wall, Greater Curvature, or Posterior Wall) need to be accurately recorded.

Methods: For this study, we populated our dataset using screenshots and video clips from experts carrying out the ESG procedure on ex vivo porcine specimens. Data augmentation was used to enlarge our dataset, and synthetic minority oversampling (SMOTE) was used to balance it. We performed stomach localization for parts of the stomach and task classification using deep learning for images and computer vision for videos.

Results: Classifying the stomach’s location from the endoscope without SMOTE for images resulted in 89% testing and 84% validation accuracy. With SMOTE, stomach-location classification reached 97% testing and 90% validation accuracy for images, and 99% testing and 98% validation accuracy for videos. For task classification, the accuracies were 97% testing and 89% validation for images, while for videos the accuracy was 100% for both testing and validation.

Conclusion: We classified the four different stomach parts manipulated during the ESG procedure with 97% training accuracy and classified the two repeated tasks with 99% training accuracy using images. We also classified the four parts of the stomach with 99% training accuracy and the two repeated tasks with 100% training accuracy using video frames.”
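The abstract reports the pipeline only at a high level (data augmentation, SMOTE balancing, and a deep-learning classifier for the four stomach locations), so the following is a minimal illustrative sketch rather than the authors’ implementation. The class names and the use of SMOTE come from the article; the directory layout (esg_frames/), the MobileNetV2 backbone, and all hyperparameters are assumptions introduced here for illustration.

```python
# Hypothetical sketch: 4-class stomach-location classifier with SMOTE balancing.
# Only the class names and the use of SMOTE come from the article; everything
# else (paths, backbone, hyperparameters) is an illustrative assumption.
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE

CLASSES = ["Incisura", "Anterior Wall", "Greater Curvature", "Posterior Wall"]
IMG_SIZE = (224, 224)

def load_dataset(root="esg_frames/"):
    # Assumed layout: esg_frames/<class name>/*.png, one folder per location.
    ds = tf.keras.utils.image_dataset_from_directory(
        root, image_size=IMG_SIZE, batch_size=None, label_mode="int")
    images, labels = zip(*[(x.numpy(), int(y)) for x, y in ds])
    return np.stack(images), np.array(labels)

# 1) Balance minority classes with SMOTE on flattened pixel vectors,
#    then restore the image shape for the CNN.
X, y = load_dataset()
n, h, w, c = X.shape
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X.reshape(n, -1), y)
X_bal = X_bal.reshape(-1, h, w, c)

# 2) Light augmentation plus a frozen ImageNet backbone and a small head.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
])
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False,
    weights="imagenet", pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale [0,255] -> [-1,1]
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 3) Train with a held-out validation split, mirroring the reported
#    testing/validation accuracy breakdown.
model.fit(X_bal, y_bal, validation_split=0.2, epochs=20, batch_size=32)
```

In this sketch SMOTE is applied to flattened pixel vectors before training; the article does not say whether balancing was performed on raw frames or on learned features, so that placement is also an assumption. In practice, applying SMOTE to CNN embeddings rather than raw pixels is a common, more memory-friendly variant. The two-class task classifier (grasping vs. suturing) described in the article would follow the same pattern with a two-unit output layer.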
Keywords: Lakeland, Florida, United States, North and Central America, Artificial Intelligence, Bariatric Surgery, Emerging Technologies, Gastroplasty, Health and Medicine, Machine Learning, Florida Polytech University