The surface surveillance methods currently used in the airport airfield area suffer from large positioning deviations, instability, frequent jumps, and the fact that all results are point-source positions. To address these problems, a visual-image-based airfield area surveillance method is designed that achieves fast and accurate target detection and contour positioning, making airfield surveillance more stable and precise. A network model combining MobileNetV3 and YOLOv5 (hereinafter MobileNetV3-YOLOv5) is proposed, in which MobileNetV3 is used as the backbone of YOLOv5 to improve target detection speed and accuracy. An improved Oriented FAST and Rotated BRIEF (ORB) algorithm based on optimized feature point extraction is also proposed: the image is divided into multiple regions and feature points are extracted from each region separately, increasing the number of feature points recognized inside the target detection box; the feature points are then clustered and filtered, and finally a minimum bounding box is fitted according to the recognized target type to obtain the target's contour position. Test results show that, compared with the original YOLOv5 model, the MobileNetV3-YOLOv5 method improves target recognition accuracy by 5 percentage points and processing efficiency by 14 frames per second; within a range of 0-60 m, the contour estimation error is only 2.9%. These results demonstrate the effectiveness of the proposed surveillance method, which can improve the positioning accuracy and operational safety of airfield area surveillance.
Research on an improved large-target surveillance method for the airfield area based on visual images
The ground moving target surveillance methods currently used in the airport airfield area have several drawbacks: the positioning is unstable, has large deviations, and yields only point-source results, so it increasingly fails to meet the demands of complex operation scenarios in the airfield area. To solve these problems, an airfield area surveillance method based on visual images is designed, which achieves fast and accurate target detection and contour positioning; the stability and consistency of video also allow more stable and precise surveillance over the airfield. Firstly, a network model based on MobileNetV3 and YOLOv5 (hereinafter referred to as MobileNetV3-YOLOv5) is established to monitor moving targets in the airfield area. Using the lightweight MobileNetV3 model in the backbone of YOLOv5 increases the processing speed at the input side, thereby improving the speed and accuracy of target detection. Next, an improved Oriented FAST and Rotated BRIEF (ORB) algorithm based on optimized feature point extraction is proposed. The entire image is divided into multiple regions, and feature points are extracted from each region separately, which increases the number of feature points recognized inside the target detection box. Finally, the feature points are clustered and filtered, and the contour is delineated with a minimum bounding box according to the recognized target type, yielding the approximate contour of the target; in this way contour annotation is achieved together with precise positioning. Test results show that, compared with the original YOLOv5 model, the MobileNetV3-YOLOv5 model improves target recognition accuracy by 5 percentage points and processing efficiency by 14 frames per second. In addition, the contour estimation error in the range of 0-60 m is only 2.9%. The results verify the effectiveness of the proposed surveillance method, which can improve the positioning accuracy and operational safety of surveillance in the airfield area.
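As an illustration of the contour positioning step, the following Python sketch shows one way region-wise ORB extraction, feature point clustering, and minimum-bounding-box fitting could be combined inside a detected target box. The grid size, the use of DBSCAN for the clustering stage, and all parameter values are assumptions made for illustration only, not the settings used in this paper.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def grid_orb_points(gray_roi, grid=(4, 4), per_cell=100):
    """Extract ORB keypoint coordinates cell by cell so that low-texture
    regions of the detection box still contribute feature points."""
    h, w = gray_roi.shape
    orb = cv2.ORB_create(nfeatures=per_cell)
    cell_h, cell_w = h // grid[0], w // grid[1]
    points = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            y0, x0 = r * cell_h, c * cell_w
            cell = gray_roi[y0:y0 + cell_h, x0:x0 + cell_w]
            for kp in orb.detect(cell, None):
                # shift cell-local coordinates back into the ROI frame
                points.append((kp.pt[0] + x0, kp.pt[1] + y0))
    return np.float32(points)

def contour_box(points, eps=15.0, min_samples=5):
    """Cluster the feature points, keep the largest cluster (assumed to
    belong to the target), and fit a minimum-area bounding box to it."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None
    main_cluster = points[labels == np.bincount(valid).argmax()]
    rect = cv2.minAreaRect(main_cluster)   # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)             # 4 corners of the oriented box
```

In use, gray_roi would be the grayscale crop of a MobileNetV3-YOLOv5 detection box, and the returned corner points give an oriented contour estimate for that target; how the box is refined per target type follows the paper's method rather than this sketch.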
safety engineering; flight area operations; visual images; object detection; Oriented FAST and Rotated BRIEF (ORB) algorithm