Data on Pattern Recognition and Artificial Intelligence Discussed by Researchers at University of Sriwijaya (Denoised Non-local Means With BDDU-Net Architecture for Robust Retinal Blood Vessel Segmentation)


2024 FEB 20 (NewsRx) – By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Data detailed on Machine Learning - Pattern Recognition and Artificial Intelligence have been presented. According to news reporting originating in Palembang, Indonesia, by NewsRx journalists, research stated, "Retinal blood vessels can be obtained by image segmentation. This study proposes combining image enhancement and segmentation to obtain retinal blood vessels."

The news reporters obtained a quote from the research from the University of Sriwijaya: "The image enhancement stage uses CLAHE and Denoised Non-Local Means to increase contrast and reduce noise in the original image, and Bottom-Hat (BTH) filtering is applied to brighten dark features and darken bright features, making the blood vessels in the retinal image more visible. The proposed BDDU-Net segmentation architecture combines U-Net in the encoder, DenseNet in the bridge, and Bi-ConvLSTM in the decoder. Image enhancement performance is measured with PSNR and SSIM: the PSNR exceeds 40 dB and the SSIM is close to 1 on both the DRIVE and STARE datasets, showing that the enhancement stage improves the quality of the original image. Segmentation performance of the BDDU-Net architecture is measured by accuracy, sensitivity, specificity, IoU, and F1-Score. On the DRIVE dataset it obtained 95.578% accuracy, 85.75% sensitivity, 96.75% specificity, 67.407% IoU, and 80.53% F1-Score; on the STARE dataset, 97.63% accuracy, 84.33% sensitivity, 98.66% specificity, 75.67% IoU, and 86.15% F1-Score."
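The bottom-hat operation the researchers describe (emphasizing dark, thin vessel structures against a brighter background) is the morphological closing of an image minus the image itself. A minimal pure-NumPy sketch with a flat square structuring element; the kernel size and function names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def _window_filter(img, k, reduce_fn):
    """Apply a k x k sliding-window min/max filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return reduce_fn(windows, axis=(-1, -2))

def bottom_hat(img, k=5):
    """Grayscale bottom-hat: morphological closing minus the image.

    Closing = erosion (min filter) of the dilation (max filter) with a
    flat k x k kernel. Dark, thin structures such as retinal vessels
    yield a strong positive response; flat background yields ~0.
    """
    closing = _window_filter(_window_filter(img, k, np.max), k, np.min)
    return closing - img

# Synthetic fundus-like patch: bright background with one dark "vessel" row.
img = np.full((11, 11), 200, dtype=int)
img[5, :] = 50
resp = bottom_hat(img)
print(resp[5, 5], resp[0, 0])  # strong response on the vessel, ~0 elsewhere
```

In a full pipeline this would run after contrast enhancement and denoising, as in the CLAHE → Non-Local Means → Bottom-Hat ordering the study describes.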

Keywords: Palembang, Indonesia, Asia, Pattern Recognition and Artificial Intelligence, Machine Learning, University of Sriwijaya
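The segmentation scores quoted above (accuracy, sensitivity, specificity, IoU, F1-Score) all follow from a binary confusion matrix. A minimal NumPy sketch of those standard definitions; the function name and demo masks are illustrative, not taken from the paper:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute accuracy, sensitivity, specificity, IoU and F1 from binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background pixels correctly rejected
    fp = np.sum(pred & ~truth)   # background wrongly marked as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "iou": tp / (tp + fp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Tiny demo masks (1 = vessel, 0 = background).
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
m = segmentation_metrics(pred, truth)
print({k: round(float(v), 3) for k, v in m.items()})
```

Note that IoU is always the strictest of these measures (it penalizes both false positives and false negatives without the doubled true-positive credit F1 gives), which is consistent with IoU being the lowest figure reported for both datasets.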

Robotics & Machine Learning Daily News

Year, Volume (Issue): 2024 (Feb. 20)