Experimental study on lightweight sweet potato quality grading based on improved YOLOv8
[Objective] Sweet potatoes, known for their high, stable yield and rich nutrition, are endorsed by the World Health Organization as an ideal food source, serving both dietary and economic purposes. In 2021, China alone produced 48 million tons of sweet potatoes, approximately 53.82% of the world's total output. Despite its prominence as the leading producer, China still relies heavily on manual labor to sort out flawed sweet potatoes. To improve the efficiency of sweet potato sorting and achieve automatic quality-based classification, a lightweight method based on an improved YOLOv8 model is proposed. [Methods] Sweet potatoes are divided into three grades, and a data acquisition device is built to collect images. Several augmentation methods are applied to the sweet potato dataset, yielding 3,472 images in total. To make the model lightweight, the backbone of the original YOLOv8s is replaced with a modified EdgeNeXt, which reduces the number of parameters, the computational cost, and the model weight. The SCConv convolution is then used to refine the C2f module, further reducing model complexity. Finally, to offset the performance degradation that the lightweight design may cause, the lightweight CARAFE operator and the Focal-MPDIoU loss function, which combines Focal loss with MPDIoU, replace the upsampling module and loss function of the original model, thereby improving detection performance. [Results] Ablation experiments show that, compared with the original model, the improved lightweight model reduces the number of parameters, computational cost, and model weight by 38.4%, 32.7%, and 37.8%, respectively, while the precision and the mean average precision (mAP) increase by 0.3% and 0.9%, respectively. The proposed model is then compared with the Faster R-CNN, SSD, YOLOv3, and YOLOv7-tiny models. The results indicate that the two-stage Faster R-CNN model is markedly more complex than the single-stage detectors, with a mAP below 80%. Compared with the SSD model, the improved model raises mAP by 15.11% and reduces the number of parameters, computational cost, and model weight by 74.0%, 69.5%, and 84.7%, respectively. Compared with the YOLOv3 model, mAP increases by 5.8%, with reductions of 88.9%, 70.9%, and 94.0% in the number of parameters, computational cost, and model weight, respectively. Compared with the YOLOv7-tiny model, mAP increases by 3.4% and the model weight decreases by 39.4%. Compared with the original YOLOv8s model, mAP increases by 0.9%, with reductions of 38.3%, 32.7%, and 37.8% in the number of parameters, computational cost, and model weight, respectively. [Conclusions] These experiments demonstrate the substantial advantages of the proposed model in both model complexity and detection performance. The findings offer a reference for deploying the vision module of sweet potato quality grading equipment and provide technical support for realizing automatic quality-based sweet potato sorting.
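To make the MPDIoU component of the loss described in the Methods concrete, the sketch below implements a plain MPDIoU bounding-box loss for axis-aligned boxes. It is a minimal illustration, not the paper's implementation: the Focal weighting the authors combine with MPDIoU is omitted, and the corner coordinate convention, function name, and image size are assumptions.

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """1 - MPDIoU for two boxes given as (x1, y1, x2, y2).

    MPDIoU subtracts from the IoU the squared distances between the
    top-left and bottom-right corners of the predicted and ground-truth
    boxes, each normalized by the squared image diagonal.
    """
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt

    # Intersection area of the two boxes
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih

    # Union area and plain IoU
    union = ((px2 - px1) * (py2 - py1)
             + (gx2 - gx1) * (gy2 - gy1) - inter)
    iou = inter / union if union > 0 else 0.0

    # Squared corner distances, normalized by the squared image diagonal
    diag2 = img_w ** 2 + img_h ** 2
    d1 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2  # top-left corners
    d2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2  # bottom-right corners

    return 1.0 - (iou - d1 / diag2 - d2 / diag2)
```

Perfectly overlapping boxes give a loss of 0, and, unlike a pure IoU loss, a displaced non-overlapping box still receives a usable gradient signal through the corner-distance terms.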