Existing pitaya detection methods target only a single performance indicator and therefore struggle to meet the needs of real agricultural scenarios. To address this, an accurate and efficient dual-index detection method for pitaya quality and maturity was proposed. First, a style-based generative adversarial network with adaptive discriminator augmentation was used to expand the pitaya images and build a pitaya dataset covering complex environments. Gamma transformation was applied for image enhancement to highlight pitaya features and reduce the influence of the lighting environment. Second, the YOLO v7-RA model was proposed. An ELAN_R3 module was designed to replace the efficient layer aggregation network (ELAN) module, reducing the extraction of repetitive features by the backbone network, strengthening the model's attention to fine-grained features, and improving dual-index detection accuracy. A mixed attention mechanism (mixture of self-attention and convolution, ACmix) was incorporated to enhance the model's ability to extract and integrate features and to suppress interference from cluttered background information. Finally, the detection performance of the YOLO v7-RA model was verified through experiments. The results show that the method achieves a precision of 97.4%, a recall of 97.7%, an mAP@0.5 of 96.2%, and a frame rate of 74 f/s, balancing detection accuracy and detection speed. Even under occlusion, the detection precision of the YOLO v7-RA model still reaches 91.4%, demonstrating good generalization ability and providing technical support for the development of intelligent pitaya picking.
Dual-index Detection Method of Pitaya Quality and Maturity Based on YOLO v7-RA
Research on pitaya detection methods is the basis and prerequisite for realizing intelligent picking. Existing pitaya detection methods target only a single performance indicator, which makes it difficult to meet the needs of real agricultural scenarios. Therefore, an accurate and efficient dual-index detection method for pitaya quality and maturity was proposed. Firstly, a style-based generative adversarial network with adaptive discriminator augmentation was used to expand the pitaya images and establish a pitaya dataset covering complex environments. The images were enhanced by gamma transformation to highlight the characteristics of pitaya and reduce the impact of the lighting environment. Secondly, the YOLO v7-RA model was proposed. An ELAN_R3 module was designed to replace the efficient layer aggregation network (ELAN) module, reducing the extraction of repetitive features by the backbone network; this enhanced the model's attention to fine-grained features and improved the accuracy of dual-index detection. The mixture of self-attention and convolution (ACmix) was applied to enhance the model's ability to extract and integrate feature information and to reduce the interference of cluttered background information. Finally, the detection performance of the YOLO v7-RA model was verified through experiments. Experimental results showed that the precision of the method was 97.4%, the recall was 97.7%, the mAP@0.5 was 96.2%, and the frame rate was 74 f/s, achieving a balance between detection accuracy and detection speed. Even under occlusion, the detection precision of the YOLO v7-RA model still reached 91.4%. The model had good generalization ability and can provide technical support for the development of intelligent pitaya picking.
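As a concrete illustration of the gamma-transform enhancement step mentioned in the abstract, the following is a minimal Python/OpenCV sketch. The gamma value of 0.8, the lookup-table implementation, and the file names are illustrative assumptions; the paper does not specify its exact parameters.

```python
import cv2
import numpy as np


def gamma_transform(image: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Apply gamma correction to an 8-bit image.

    gamma < 1 brightens dark regions (useful under weak illumination),
    gamma > 1 darkens over-exposed regions; the value here is illustrative.
    """
    # Build a 256-entry lookup table mapping each pixel value v to 255*(v/255)^gamma.
    table = np.array(
        [255.0 * (v / 255.0) ** gamma for v in range(256)], dtype=np.uint8
    )
    return cv2.LUT(image, table)


if __name__ == "__main__":
    # Hypothetical file name; replace with an image from the pitaya dataset.
    img = cv2.imread("pitaya.jpg")
    enhanced = gamma_transform(img, gamma=0.8)
    cv2.imwrite("pitaya_gamma.jpg", enhanced)
```

In practice, the gamma value would be tuned (or adapted per image) so that pitaya regions remain distinguishable under both strong backlight and low-light conditions before the images are fed to the detector.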