Objective Accurate segmentation of brain tumors is a challenging clinical diagnosis task, especially in assessing the degree of malignancy. Magnetic resonance imaging (MRI) of brain tumors exhibits various shapes and sizes, and the accurate segmentation of small tumors plays a crucial role in achieving accurate assessment results. However, the significant variability in the shape and size of brain tumors, together with their fuzzy boundaries, makes tumor segmentation a challenging task. In this paper, we propose a multi-modal MRI brain tumor image segmentation network, named D3D-Net, based on a dual encoder fusion architecture to improve segmentation accuracy. The performance of the proposed network is evaluated on the BraTS2018 and BraTS2019 datasets.

Method The paper proposes a network that utilizes multiple encoders and a feature fusion strategy. The network incorporates dual-layer encoders to thoroughly extract image features from various modal combinations, thereby enhancing segmentation accuracy. In the encoding phase, a targeted fusion strategy is adopted to fully integrate the feature information from the upper and lower sub-encoders, effectively eliminating redundant features. Additionally, the encoding-decoding process employs a dilated multi-fiber module to capture multi-scale image features without incurring additional computational cost. Furthermore, an attention gate is introduced in the process to preserve fine-grained details. We conducted ablation and comparative experiments on the BraTS2018, BraTS2019, and BraTS2020 datasets. We used the BraTS2018 training dataset, which consists of the magnetic resonance images of 210 high-grade glioma (HGG) and 75 low-grade glioma (LGG) patients; the corresponding validation dataset contains 66 cases. The BraTS2019 dataset adds 49 HGG cases and 1 LGG case on top of the BraTS2018 dataset. Specifically, BraTS2018 is an open dataset that was released for the 2018 Brain Tumor Segmentation Challenge. The dataset contains
multi-modal magnetic resonance images of HGG and LGG patients, including T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) image sequences. T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and FLAIR images are all MRI sequences used to image the brain. T1-weighted MRI scans emphasize the contrast between different tissues on the basis of the relaxation time of the hydrogen atoms in the brain. In T1-weighted images, the cerebrospinal fluid appears dark, while the white matter appears bright. This type of scan is often used to detect structural abnormalities in the brain, such as tumors, and to assess brain atrophy. T1-weighted contrast-enhanced MRI scans involve the injection of a contrast agent into the bloodstream to improve the visualization of certain types of brain lesions. This type of scan is particularly useful in detecting tumors because the contrast agent tends to accumulate in abnormal tissues. T2-weighted MRI scans emphasize the contrast between different tissues on the basis of the water content in the brain. In T2-weighted images, the cerebrospinal fluid appears bright, while the white matter appears dark. This type of scan is often used to detect areas of brain edema or inflammation. FLAIR MRI scans are similar to T2-weighted images but with the suppression of signals from the cerebrospinal fluid. This type of scan is particularly useful in detecting abnormalities that may be difficult to visualize with other types of scans, such as small areas of brain edema or lesions in the posterior fossa. The dataset is divided into two subsets: the training dataset, which includes 285 cases (210 HGG and 75 LGG patients), and the validation dataset, which includes 66 cases.

Result The proposed D3D-Net exhibits superior performance compared with the baseline 3D U-Net and DMF-Net models. Specifically, on the BraTS2018 dataset, D3D-Net achieves a high average Dice coefficient of
79.7%, 89.5%, and 83.3% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. This result shows the effectiveness of the proposed network in accurately segmenting brain tumors of different sizes and shapes. D3D-Net also demonstrates an improvement in segmentation accuracy over the 3D U-Net and DMF-Net models. In particular, compared with the 3D U-Net model, D3D-Net shows a significant improvement of 3.6%, 1.0%, and 11.5% in enhancing tumor, whole tumor, and tumor core segmentation, respectively. Additionally, compared with the DMF-Net model, D3D-Net demonstrates an improvement of 2.2%, 0.2%, and 0.1% in the same segmentation tasks, respectively. On the BraTS2019 dataset, D3D-Net also achieves high accuracy in segmenting brain tumors. Specifically, the network achieves an average Dice coefficient of 89.6%, 91.4%, and 92.7% for enhancing tumor, whole tumor, and tumor core segmentation, respectively. The improvement in segmentation accuracy over the 3D U-Net model is 2.2%, 0.6%, and 7.1%, respectively, for enhancing tumor, whole tumor, and tumor core segmentation. These results suggest that the proposed D3D-Net is an effective and accurate approach for segmenting brain tumors of different sizes and shapes. The network's superior performance compared with the 3D U-Net and DMF-Net models indicates that the dual encoder fusion architecture, which fully integrates multi-modal features, is crucial for accurate segmentation. Moreover, the high accuracy achieved by D3D-Net on both the BraTS2018 and BraTS2019 datasets demonstrates the robustness of the proposed method and its potential to aid in the accurate assessment of brain tumors, ultimately improving clinical diagnosis. On the BraTS2020 dataset, the average Dice values for enhancing tumor, whole tumor, and tumor core increase by 2.5%, 1.9%, and 2.2%, respectively, compared with those of 3D U-Net.

Conclusion The proposed dual encoder fusion network, D3D-Net, demonstrates a promising performance in accurately segmenting
brain tumors from MRI images. The network can improve the accuracy of brain tumor segmentation, aid in the accurate assessment of brain tumors, and ultimately improve clinical diagnosis. The proposed network has the potential to become a valuable tool for radiologists and medical practitioners in the field of neuro-oncology.
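The attention gate named in the Method is, in the common formulation, an additive gate: a coarse decoder signal re-weights encoder skip features voxel by voxel so that fine-grained tumor detail is kept while irrelevant responses are suppressed. The following NumPy sketch illustrates that standard mechanism only; the weight shapes and function names are illustrative assumptions, not the actual D3D-Net configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate (illustrative sketch).

    x : skip features from the encoder, shape (N, C_x), N voxels
    g : gating signal from the coarser decoder level, shape (N, C_g)
    W_x, W_g : linear maps projecting x and g into a shared space
    psi : map from the shared space to one attention score per voxel
    Returns x scaled voxel-wise by attention coefficients in (0, 1).
    """
    q = np.maximum(x @ W_x + g @ W_g, 0.0)   # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                 # (N, 1) attention coefficients
    return alpha * x                         # suppress irrelevant responses

# Toy shapes: 5 voxels, 4 skip channels, 3 gating channels, 2 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 4))
g = rng.standard_normal((5, 3))
out = attention_gate(x, g,
                     W_x=rng.standard_normal((4, 2)),
                     W_g=rng.standard_normal((3, 2)),
                     psi=rng.standard_normal((2, 1)))
```

Because each attention coefficient lies in (0, 1), the gate can only attenuate a skip feature, never amplify it; in a real network the linear maps would be learned 1×1×1 convolutions over 3D feature volumes.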
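All scores reported above are Dice coefficients, which measure voxel overlap between a predicted mask and the ground truth; BraTS evaluates them over three nested regions (whole tumor, tumor core, enhancing tumor) derived from the label map. A minimal sketch of that evaluation, with function names of our own choosing (the label-to-region grouping follows the usual BraTS convention):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) over binary voxel masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def brats_region_dice(pred_labels, target_labels):
    """Score the three nested BraTS regions from integer label maps.

    Conventional labels: 1 = necrotic/non-enhancing core, 2 = edema,
    4 = enhancing tumor (label 3 is unused in BraTS).
    """
    regions = {
        "whole_tumor": (1, 2, 4),
        "tumor_core": (1, 4),
        "enhancing_tumor": (4,),
    }
    return {name: dice_coefficient(np.isin(pred_labels, labs),
                                   np.isin(target_labels, labs))
            for name, labs in regions.items()}
```

A perfect prediction yields a Dice of 1.0 for every region, and fully disjoint masks yield approximately 0; the small `eps` keeps the ratio defined when a region is absent from both volumes.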