Research Associate, Aga Khan University Hospital, Karachi, PK
Introduction: This project uses deep convolutional neural networks, specifically the U-Net and ResNet50 architectures, for brain tumor classification in both original Hyperfine magnetic resonance (MR) images and their high-resolution upscaled counterparts, differentiating tumor-present from tumor-absent images.
Methods: Model performance was measured by accuracy, a similarity index, and the Dice similarity coefficient. The dataset comprised 63 images upscaled with the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) method, and publicly available Kaggle datasets were imported to train the models. Input data consisted of MRI scans preprocessed to accentuate the contrast between the brain tumor and the surrounding tissue. The U-Net model estimates, through a SoftMax function, the probability that each pixel of the input image belongs to the targeted tumor region, and the model is optimized during training with binary cross-entropy loss; the input image is fed through the U-Net to produce an output probability map.
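The following is a minimal, illustrative sketch (not the authors' code) of the approach described above: a small U-Net-style encoder-decoder that outputs a per-pixel tumor probability map and is trained with binary cross-entropy. The abstract describes a SoftMax output; for a single tumor-vs-background probability map this sketch uses the equivalent single-channel sigmoid output, which is an assumption. The function names, layer widths, and input shape are placeholders.

```python
# Illustrative sketch only; architecture details and names are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: conv blocks with downsampling.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsampling with skip connections from the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # Per-pixel probability that the pixel belongs to the tumor region.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=["accuracy"])
# model.fit(train_images, train_masks, epochs=..., validation_data=...)
```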
Results: The U-Net architecture achieved 93% accuracy for classifying tumor presence against the closest normal brain tissue pixels, while the ResNet50 model achieved 96% accuracy on the same MR brain tumor images. For tumor segmentation, the ResNet50 model attained the highest mean Dice similarity coefficient, 0.96, compared with 0.93 for U-Net. Qualitative analysis of the segmentation masks showed that ResNet50 produced smoother tumor boundary outlines, with higher precision (0.97) than U-Net (0.94). U-Net reached 87% sensitivity versus 84% for ResNet50, and specificity was comparable at 91% for U-Net and 90% for ResNet50. Even with upscaling, accuracy remained above 90% and Dice scores remained above 0.75.
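As a point of reference for the metrics reported above, the sketch below shows one common way to compute the Dice coefficient, precision, sensitivity, specificity, and pixel accuracy from a predicted binary mask and a ground-truth mask; it is illustrative only, and the names (segmentation_metrics, prob_map, ground_truth_mask) are assumptions rather than the authors' code.

```python
# Illustrative metric computation from binary segmentation masks.
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: arrays of the same shape; nonzero/True marks tumor pixels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "dice":        2 * tp / (2 * tp + fp + fn),   # overlap with ground truth
        "precision":   tp / (tp + fp),                # positive predictive value
        "sensitivity": tp / (tp + fn),                # recall of tumor pixels
        "specificity": tn / (tn + fp),                # recall of normal tissue
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Example usage: threshold the model's probability map at 0.5 before scoring.
# metrics = segmentation_metrics(prob_map > 0.5, ground_truth_mask)
```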
Conclusion: Overall, the U-Net model demonstrated superior quantitative and qualitative performance for brain tumor classification, detection, and segmentation in this analysis of original and upscaled Hyperfine images.