Automated surgical guidance systems are becoming increasingly important in clinical settings, driven by advances in image detection technologies and the growing demand for surgical procedures. However, the requirement for real-time, high-precision visual guidance restricts the range of applications in clinical surgery. When a visual signal guides the robotic arm for path planning, the low planning efficiency of traditional algorithms can hinder the real-time capability of the system. To address these problems, a navigation control system based on a point-laser-guided surgical robotic arm is proposed. The visual component is based on the YOLOv5 network, with input preprocessed by a super-resolution reconstruction algorithm, and fused feature aggregation and single-scale recognition improvement strategies are proposed to achieve rapid and accurate point-laser tracking. For motion planning, a rapidly-exploring random tree (RRT) algorithm that integrates target bias and bidirectional expansion is proposed; it constrains the target-point attitude using lesion point-cloud information for collision pre-detection and planning decisions during path generation. The validity and feasibility of the proposed algorithms were verified experimentally: the optimized detection model achieves an AP50 recognition accuracy of 97.6% and an AP75 recognition accuracy of 83.5%, a 7.2-percentage-point improvement over the original YOLOv5 in video target recognition, and the improved RRT algorithm accurately and rapidly plans the optimal obstacle-avoidance path.
Keywords: YOLOv5; multi-scale integration; rapidly-exploring random trees; postural restraints; path planning
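To illustrate the planning approach described above, the following is a minimal 2D sketch of a goal-biased, bidirectional RRT with a collision pre-check. It is not the authors' implementation: the paper additionally constrains the target-point attitude using lesion point-cloud information, whereas this sketch assumes a planar workspace, circular obstacles, and hypothetical names and parameters (e.g. plan_rrt_connect, goal_bias, step_size).

```python
import math
import random

# Minimal sketch of a goal-biased, bidirectional RRT in 2D.
# Names and parameters are illustrative, not the paper's implementation.

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y, self.parent = x, y, parent

def distance(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def steer(from_node, to_node, step_size):
    """Move from `from_node` toward `to_node` by at most `step_size`."""
    d = distance(from_node, to_node)
    if d <= step_size:
        return Node(to_node.x, to_node.y, from_node)
    theta = math.atan2(to_node.y - from_node.y, to_node.x - from_node.x)
    return Node(from_node.x + step_size * math.cos(theta),
                from_node.y + step_size * math.sin(theta), from_node)

def is_collision_free(node, obstacles, clearance=0.5):
    """Collision pre-check against circular obstacles (cx, cy, r)."""
    return all(math.hypot(node.x - cx, node.y - cy) > r + clearance
               for cx, cy, r in obstacles)

def nearest(tree, node):
    return min(tree, key=lambda n: distance(n, node))

def extract_path(node):
    path = []
    while node is not None:
        path.append((node.x, node.y))
        node = node.parent
    return path[::-1]

def plan_rrt_connect(start, goal, obstacles, bounds,
                     step_size=1.0, goal_bias=0.2, max_iters=5000):
    """Grow two trees (from start and goal), bias sampling toward the
    opposite tree's root, and try to connect them after every extension."""
    tree_a, tree_b = [Node(*start)], [Node(*goal)]
    for _ in range(max_iters):
        # Target bias: occasionally sample the other tree's root directly.
        if random.random() < goal_bias:
            sample = Node(tree_b[0].x, tree_b[0].y)
        else:
            sample = Node(random.uniform(bounds[0], bounds[1]),
                          random.uniform(bounds[2], bounds[3]))
        new_a = steer(nearest(tree_a, sample), sample, step_size)
        if is_collision_free(new_a, obstacles):
            tree_a.append(new_a)
            # Bidirectional expansion: grow the other tree toward new_a.
            new_b = steer(nearest(tree_b, new_a), new_a, step_size)
            if is_collision_free(new_b, obstacles):
                tree_b.append(new_b)
                if distance(new_a, new_b) <= step_size:
                    path_a, path_b = extract_path(new_a), extract_path(new_b)
                    # Orient the joined path from start to goal.
                    if (tree_a[0].x, tree_a[0].y) == start:
                        return path_a + path_b[::-1]
                    return path_b + path_a[::-1]
        tree_a, tree_b = tree_b, tree_a  # alternate which tree is extended
    return None

if __name__ == "__main__":
    obstacles = [(5.0, 5.0, 1.5)]   # hypothetical obstacle around the lesion
    path = plan_rrt_connect((0, 0), (10, 10), obstacles, (0, 10, 0, 10))
    print(path)
```

The goal bias accelerates convergence toward the target, while alternating extension of the two trees yields the bidirectional expansion; a full system would replace the circular-obstacle check with collision pre-detection against the lesion point cloud and add the attitude constraint on the target point.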