A computationally efficient and lightweight model for high-accuracy OCT image classification
Abstract Current automated retinal OCT classification systems encounter deployment limitations due to excessive computational complexity. This paper presents Light-AP-EfficientNet, a lightweight architecture employing adaptive pooling for enhanced feature representation. We first optimize the convolutional layers of EfficientNet to eliminate redundant structures, significantly reducing the model's parameter count. Then, adaptive pooling layers are integrated to enable the model to capture both global and local features, improving its classification performance. Experimental results show that Light-AP-EfficientNet achieves 99.7% accuracy, 99.7% recall, and an F1 score of 0.997 on the UCSD dataset, while requiring only 17% of the parameters of ShuffleNetV2 and 19% of the computational load of MobileNetV2. The model processes a single image in just 0.028 seconds on a CPU and 0.009 seconds on a GPU. It also outperforms recent models, with improvements of up to 4.5% in accuracy, 5.42% in precision, and 4.5% in F1 score. With high accuracy and reduced hardware requirements, Light-AP-EfficientNet is well suited to computer vision tasks in resource-constrained environments.
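The abstract describes integrating adaptive pooling so the network captures both global and local features. The paper's exact layer configuration is not given here, so the following is only a minimal NumPy sketch of the general technique: adaptive average pooling produces a fixed output grid regardless of input size, and concatenating a coarse (1×1, global) and a finer (2×2, local) pooled view yields a multi-scale feature vector. All function names and the 1×1/2×2 scale choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_avg_pool2d(x, out_h, out_w):
    """Average over bins with edges floor(i*H/out) .. ceil((i+1)*H/out),
    so any input H x W maps to a fixed out_h x out_w grid.
    (Sketch of the standard adaptive-pooling definition, not the paper's code.)"""
    H, W = x.shape[-2], x.shape[-1]
    out = np.zeros(x.shape[:-2] + (out_h, out_w), dtype=float)
    for i in range(out_h):
        h0, h1 = (i * H) // out_h, -(-(i + 1) * H // out_h)  # floor, ceil
        for j in range(out_w):
            w0, w1 = (j * W) // out_w, -(-(j + 1) * W // out_w)
            out[..., i, j] = x[..., h0:h1, w0:w1].mean(axis=(-2, -1))
    return out

def multi_scale_pool_features(feat):
    """Concatenate a global (1x1) and a local (2x2) adaptive-pooled view
    of a (C, H, W) feature map into one flat vector of length 5*C."""
    global_v = adaptive_avg_pool2d(feat, 1, 1).reshape(-1)  # C values
    local_v = adaptive_avg_pool2d(feat, 2, 2).reshape(-1)   # 4*C values
    return np.concatenate([global_v, local_v])
```

In a classifier head, such a concatenated vector would typically feed a final fully connected layer; because adaptive pooling fixes the output grid size, the head works for any input resolution.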
Keywords: Computer Vision; Retinal OCT images; EfficientNet; Adaptive pooling; Lightweight networks