Open Access

ARTICLE

A U-Net-Based CNN Model for Detection and Segmentation of Brain Tumor

Rehana Ghulam1, Sammar Fatima1, Tariq Ali1, Nazir Ahmad Zafar1, Abdullah A. Asiri2, Hassan A. Alshamrani2,*, Samar M. Alqhtani3, Khlood M. Mehdar4

1 Department of Computer Science, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, 57000, Pakistan
2 Radiological Sciences Department, College of applied medical sciences, Najran University, Najran, 61441, Saudi Arabia
3 Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia
4 Anatomy Department, Medicine College, Najran University, Najran, 61441, Saudi Arabia

* Corresponding Author: Hassan A. Alshamrani. Email: email

Computers, Materials & Continua 2023, 74(1), 1333-1349. https://doi.org/10.32604/cmc.2023.031695

Abstract

The human brain consists of millions of cells that control the overall functioning of the human body. When these cells start behaving abnormally, brain tumors occur. Precise, early-stage brain tumor detection has always been a challenge for medical experts. To address this issue, various deep learning techniques for brain tumor detection and segmentation have been developed and applied to different datasets with promising results, but early-stage detection of brain tumors to save human lives remains an open problem. For this purpose, we propose a novel U-Net-based Convolutional Neural Network (CNN) technique to detect and segment brain tumors in Magnetic Resonance Imaging (MRI). A 2-dimensional, publicly available Multimodal Brain Tumor Image Segmentation (BRATS2020) dataset with 1840 brain-tumor MRI images of size 240 × 240 pixels was used. After initial preprocessing, the proposed model was trained by dividing the dataset into three parts: training, testing, and validation. Our model attained an accuracy of 0.98 on the BRATS2020 dataset, the highest among the existing techniques.

Keywords


1  Introduction

The brain is a complex part of the human body and controls neural activities like intelligence, memory, and consciousness. A brain tumor is an abnormal and uncontrollable growth of brain cells [1]. The brain is enclosed in the skull, and its tissues are so interconnected that a little disturbance in the tissues can damage the normal cells [2,3]. In recent years, the incidence of brain tumors has been increasing; according to a 2019 American survey, over 86,000 new cases were diagnosed [4]. Tumors are diagnosed and analyzed using Magnetic Resonance Imaging (MRI), an imaging method used in medicine that relies on computer-generated radio waves and magnetic fields and produces detailed images of the tissues and organs of the human body.

A diagnosed brain tumor can be benign (non-cancerous) or malignant (cancerous). Cancers vary in structure, size, and the location where the cells in the brain are damaged, and the cells of the lesion region can overlap other cells [5]. Of all central nervous system tumors, 85% to 90% are brain tumors, and 80% of those are malignant and diagnosed as gliomas. Glioma refers to various subtypes of primary brain tumor, ranging from fast-growing, high-grade tumors to slower-growing, low-grade tumors. These types of tumors are primarily diagnosed in adults [6].

Previous studies have noted that newly found brain tumors can be diagnosed and treated with the help of existing MRI techniques [7]. MRI protocols evaluate the vascularity, the integrity of the blood-brain barrier, and brain tumor cellularity, providing crucial data in the form of various image contrasts. Typical MRI protocols comprise gadolinium-enhanced T1-weighted, T1-weighted, and T2-weighted/FLAIR imaging [8].

Image segmentation is a significant step when working with MRI images for brain tumor detection. In clinical practice, image segmentation has usually relied on manual outlining by humans, a challenging task because it involves slice-by-slice processing and the results depend on the decision-making skill and experience of the person performing it. In addition, reproducing the same results, even by the same person, is difficult. In recent years, however, many researchers have worked on automatic image segmentation, leading to various algorithms that have made image segmentation far easier than the manual approach. Despite this progress in automated algorithms for brain tumor segmentation, many challenges remain open, as brain tumors vary in regularity, heterogeneous appearance, shape, size, and location [9,10]. According to existing research, brain tumor segmentation can be divided into supervised learning-based and unsupervised learning-based techniques. Supervised learning-based methods train a classification model on data containing label pairs, through which new instances are segmented and classified. In contrast, unsupervised learning-based techniques cluster the data used for brain tumor segmentation based on various similarities [11].

In recent studies, supervised deep CNNs have attracted the interest of researchers. This machine learning method automatically learns complicated features directly from the data, unlike conventional supervised machine learning methods, which depend on hand-crafted features [12]. A deep CNN involves various convolutional layers, which makes convolving an image more adaptive and robust across models. The CNN is a primary type of Artificial Neural Network (ANN) [13], introduced to perform image processing and recognition and specially designed to treat pixel data.

In this work, a deep U-Net-based CNN for brain tumor detection and segmentation has been developed. The proposed U-Net-based architecture consists of two paths: an encoding path (down-sampling) and a decoding path (up-sampling).

The remainder of this paper is organized into four sections: Related Work surveys the most recent work in the field, Methodology explains the architecture of the proposed model, Results presents the experimental results, and the Conclusion summarizes the contribution of this work.

2  Related Work

In [14], an algorithm was proposed to detect brain tumors in MRI images by applying the following steps: pre-processing, feature extraction, segmentation, and image classification with a CNN. The authors developed a MATLAB GUI program. The study described eight segmentation methods, two of which were considered suitable for MRI image segmentation: Canny edge detection and the adaptive threshold approach. The system generated a message when it could not evaluate an image; because the threshold value was set to 0.75, the system's accuracy was expected to be 75%. The testing process was performed on a limited dataset.

In [15], the authors developed a hybrid technique merging the K-means and Fuzzy C-Means (FCM) algorithms to segment brain images. The Brain Surface Extractor (BSE) performed median filtering, then linear filtering and skull stripping, to produce the BSE result images. K-means is better than FCM for brain tumor segmentation, but FCM detects cells that K-means cannot, which enhances the segmentation results. In particular, the K-means clustering algorithm gives incomplete detection of malignant tumors but scales well to large datasets, while FCM retains more information about the malignant tumor than K-means.

In [16], feature extraction was used to classify MRI images as malignant, normal, or benign with a Neural Network (NN). The research used cubic-order differentiation over seven rotations to extract MRI image features. The authors combined specialist knowledge with low-level image features, then pruned the extracted parts, excluding those that caused trouble as inputs to the NN classifier. This classifier performs well at distinguishing tumor shapes for a few specific features. The disadvantage was that the features were reduced from 7 to 3, selected by the AR classifier, meaning the classifier does not provide enough good features; it gave the lowest support and confidence values.

In [17], an automated multi-stage method for brain tumor detection and neovasculature assessment was proposed, applying six main stages. The relative Cerebral Blood Volume (rCBV) from perfusion maps was used to classify MRI images into low-grade and high-grade gliomas. Extraction was done using the Kernelized Fuzzy C-Means (KFCM) approach, and the differential images method helped segment the images. The rCBV was used to reveal tumor angiogenesis. One limitation of the model is that it requires manual correction for registration and brain symmetry line detection, particularly when the tumor affects brain fissures. A second limitation is that it might be impossible to determine the rCBV threshold automatically.

Another study using deep learning techniques was proposed in [18]. Image segmentation was performed with a CNN model, since CNNs can extract local and global features simultaneously; this advantage was used to segment the whole brain. The time to segment an entire brain varied between 25 s and 3 min. A limitation of the CNN model is that each label's segmentation was predicted separately. Exploiting the essentially convolutional nature of the network and the strong performance of GPUs, the authors obtained a model 30 to 40 times faster than other proposed models.

In the study of Hao Dong et al. [19], a fully automatic segmentation model based on a U-Net CNN architecture was proposed to segment brain tumors, evaluated on the BRATS2015 dataset. The authors compared the results with those on previously used BRATS datasets and obtained efficient segmentation on the newly generated dataset. Low-Grade Glioma (LGG) and High-Grade Glioma (HGG) cases were segmented with a five-fold scheme, yielding an automatic multimodal method with no manual involvement for clinical tasks.

In [20], two seed selection techniques were proposed, K-means seed selection (KMSS) and centroid-based seed selection (CBSS), with segmentation performed by a graph cut algorithm. In the CBSS method, the intensity distribution is exploited on both halves of the brain, while in the KMSS method intensities are clustered by similarity and the mean point is calculated as the average intensity of all clusters of the whole image. Graph cut segmentation was then performed on these images to detect the brain tumor. Graph cut operates on undirected graphs, with pixels considered as nodes and their distances as edges. The results were compared with the fuzzy graph cut technique [21], and the KMSS graph cut technique was found to perform much better.

In [22], modeling was performed with a saliency-detection framework based on the active principle, and a Principal Local Intensity Contrast (PLIC) was used for the visual effects. Pre-processing applied some morphological operations, and segmentation was performed with threshold and morphological techniques, applying median filters to improve image quality. The proposed method performed better than others, although some extraction problems occurred due to local feature-based extraction, which could be overcome using the graph cut technique.

In [21], image segmentation was performed using hybrid fuzzy C-means with a graph cut algorithm, on images from the BRATS2018 dataset. Pre-processing extracted the area of interest from the images using edge detection and the inverse method. Image registration is performed before pre-processing because brain images are not static, owing to the cerebrospinal fluid present in the brain. The fuzzy C-means seed selection (FCMSS) technique [23] provides an efficient method to obtain accurate clusters, and combining it with the graph cut algorithm gives more accurate analyses and results. The FCM algorithm with seed point selection, proposed as FCMSS, proved more efficient than the plain FCM algorithm. A few previous studies are briefly summarized in Tab. 1 below.

[Table 1]

3  Methodology

MRI images usually contain detailed patterns (like brain tumors) typical of biomedical images, and these patterns can have irregular edges. Long et al. [25] proposed a hybrid architecture, named skip-architecture, to characterize highly comprehensive patterns by combining a shallow encoding layer, which represents appearance, with a deep decoding layer, which provides a high-quality representation. This method demonstrated good results for both natural and biomedical images [26]. The U-Net architecture was introduced in [27] to solve a cell tracking problem.

The overall structure of the proposed methodology, including a detailed view of all the parameters used in the U-Net-based CNN model, is explained below:

3.1 Dataset

In the proposed work, the BRATS2020 dataset was used to evaluate the efficiency and accuracy of the U-Net-based CNN model. The dataset contains 2-dimensional images taken from volumes of 155 slices each, and the size of every image is 240 × 240 pixels. For training the U-Net-based CNN model, the dataset is divided into 68% training, 12% testing, and 20% validation. The dataset's division is described in Tab. 2 and shown graphically in Fig. 1.

[Table 2]

Figure 1: Graphical representation of division of dataset
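The 68/12/20 split above can be sketched with a small helper; the random seed and the use of an index permutation are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def split_dataset(n_images, train=0.68, test=0.12, val=0.20, seed=0):
    """Shuffle image indices and split them 68/12/20 as in Tab. 2."""
    assert abs(train + test + val - 1.0) < 1e-9
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    n_train = int(round(n_images * train))
    n_test = int(round(n_images * test))
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

train_idx, test_idx, val_idx = split_dataset(1840)
# 1840 images -> 1251 train, 221 test, 368 validation
```

The split is done over shuffled indices rather than raw images, so the same helper works whether the images live on disk or in memory.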

3.2 Acquisitions of Data and Preprocessing

The BRATS2020 dataset is used for testing and evaluation purposes in the proposed model. Each case provides four channels: T1-weighted, T2-weighted, Fluid-Attenuated Inversion Recovery (FLAIR), and the segmentation ground truth (Seg). Each T1, T2, and FLAIR image is co-registered with the high-resolution contrast-enhanced T1 (T1c) image, as shown in Fig. 2 [28]. The images come pre-sampled at a size of 240 × 240. In this model, each dataset image is normalized by subtracting its mean value and dividing by its standard deviation.
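The per-image normalization step can be sketched as follows; the small epsilon guarding against division by zero is an added assumption.

```python
import numpy as np

def zscore_normalize(img, eps=1e-8):
    """Normalize one MRI slice: subtract its mean, divide by its standard deviation."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + eps)

# Illustrative 240 x 240 slice of random intensities.
slice_ = np.random.default_rng(1).integers(0, 256, size=(240, 240))
norm = zscore_normalize(slice_)
# The normalized slice has approximately zero mean and unit variance.
```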

Moreover, the tumor is classified into four labels for segmentation purposes:

Label 0: Necrosis

Label 1: Non-enhancing tumor

Label 2: Edema

Label 3: Enhancing Tumor

For training and performance evaluation, manual segmentations are used as ground truth. In our proposed study, the brain tumor regions are mainly segmented using the FLAIR images, except for edema [9], and this has provided effective results. The enhancing tumor is defined using the contrast-enhanced T1c modality. In this way, the proposed model offers efficient results with less clinical involvement.
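Since the categorical cross-entropy loss used later expects per-class probabilities, the four-label ground truth is typically one-hot encoded before training; the sketch below assumes that convention and the label numbering of Section 3.2.

```python
import numpy as np

def one_hot_labels(seg, n_classes=4):
    """One-hot encode a label map (values 0..3) along a new last axis."""
    return np.eye(n_classes, dtype=np.float32)[seg]

seg = np.array([[0, 2], [1, 3]])   # tiny illustrative label map
oh = one_hot_labels(seg)           # shape (2, 2, 4)
```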


Figure 2: Sample images of modalities used for the dataset

3.3 U-net-based Convolutional Neural Network

The proposed study uses a U-Net-based architecture consisting of two paths: an encoding path (down-sampling) and a decoding path (up-sampling), as shown in Fig. 3. The encoding path consists of 9 convolutional blocks in which every block contains 2 convolutional layers. The layers have a stride of one in each direction, rectified linear activation, and a filter size of 3 × 3, increasing the number of feature maps from 1 to 1024. In down-sampling, max pooling with a stride of 2 × 2 is applied after each block except the last, so the feature maps decrease in size from 240 × 240 to 15 × 15. In the decoding path, a deconvolutional layer with a 2 × 2 stride and a filter size of 3 × 3 is attached at the start of each block. This doubles the feature size in each direction while halving the number of feature maps, increasing the size of the feature maps from 15 × 15 back to 240 × 240. In each decoding block, two convolutional layers reduce the number of feature maps in the concatenation of the deconvolutional feature maps and the corresponding encoding-path feature maps. Unlike the original U-Net architecture, zero padding is used so that all convolutional layers in the down-sampling and up-sampling paths preserve the output dimensions. A final 1 × 1 convolutional layer reduces the number of feature maps to two, distinguishing background from foreground segmentation. No fully connected layer is used in the network.


Figure 3: The proposed U-Net architecture
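The size and filter schedule described above can be traced with a short sketch. The base filter count of 64 is an assumption (it is consistent with the stated maximum of 1024 feature maps, since 64 · 2^4 = 1024); the function tracks shapes only, not actual tensors.

```python
def unet_shapes(input_size=240, base_filters=64, levels=5):
    """Trace feature-map sizes through the encoding path (2x2 max pooling)
    and the decoding path (2x2-stride deconvolution) described above."""
    enc = []
    size, filters = input_size, base_filters
    for level in range(levels):
        enc.append((size, filters))   # two 3x3 zero-padded convs keep the size
        if level < levels - 1:
            size //= 2                # max pooling halves each spatial dimension
            filters *= 2              # feature maps double on the way down
    # Decoding mirrors the encoder: each deconv doubles the size, halves the filters.
    dec = list(reversed(enc[:-1]))
    return enc, dec

enc, dec = unet_shapes()
# enc goes 240x240 down to 15x15 while filters grow 64 -> 1024;
# dec goes back up from 30x30 to 240x240.
```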

4  Discussion and Results

In this proposed work, a fully convolutional network based on U-Net is introduced to solve the problem of brain tumor segmentation. The semantic segmentation involves both tumor detection and segmentation. Compared with existing work, this approach adds more elastic distortion, brightness adjustment, and rigid deformation, combined with the U-Net to improve its tumor segmentation and detection. Previous researchers worked with the BRATS2018 dataset [29], which has fewer patient cases, so we use the BRATS2020 dataset to evaluate our work.

The experimental results obtained using the U-Net-based CNN model for the detection and segmentation of brain tumors are described below. The model was implemented in Python on a PC with a 6 GB GTX 1060 GPU and an 8th-generation Intel Core i7 CPU with 16 GB of RAM. The training and testing processes took place as described below:

4.1 Training Process

The dataset is divided into three groups to train the U-Net-based CNN, i.e., training, testing, and validation sets. The preprocessed dataset is supplied to the proposed model to obtain the results. The model uses 3,297,793 parameters in total, of which 3,294,849 are trainable and 2,944 are non-trainable.

4.2 Testing Process

Likewise, the preprocessed dataset is supplied to the proposed model using the same layout as during the training process. The accuracy depends on how good the trained model is. The parameters used here are defined in the section above.

4.3 Performance Evaluation

The k-fold cross-validation technique is utilized to test the results on tumor images. Data is evaluated on the basis of three sub-tumoral regions for each image, as follows:

1.    The whole tumor region (all four labels: 0, 1, 2, and 3)

2.    The core region (labels 0, 1, and 3)

3.    The enhancing tumor region (label 3)
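The three evaluated regions can be derived from a segmentation label map as below, following the label numbering of Section 3.2 (0 necrosis, 1 non-enhancing, 2 edema, 3 enhancing); marking background voxels with -1 is an assumption made for this sketch.

```python
import numpy as np

def region_masks(seg):
    """Binary masks for the three evaluated sub-tumoral regions."""
    whole = np.isin(seg, [0, 1, 2, 3])   # whole tumor: every tumor label
    core = np.isin(seg, [0, 1, 3])       # core: whole tumor minus edema
    enhancing = (seg == 3)               # enhancing tumor only
    return whole, core, enhancing

seg = np.array([-1, 0, 1, 2, 3])         # -1 marks background in this sketch
whole, core, enhancing = region_masks(seg)
```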

The following statistical measures were used to assess the model's output: accuracy, categorical cross-entropy loss, specificity, sensitivity, dice coefficient, and precision. Sensitivity is the proportion of actual positive images that are correctly identified.

Sensitivity = TrP / (TrP + FlN)   (1)

Accuracy measures the model's overall performance, i.e., how many of its predictions were correct [30].

Accuracy = (TrP + TrN) / (TrP + TrN + FlP + FlN)   (2)

Precision = TrP / (TrP + FlP)   (3)

Dice_coef = 2 · TrP / (FlP + 2 · TrP + FlN)   (4)

Specificity = TrN / (TrN + FlP)   (5)

where TrP denotes True Positive images (correctly identified), TrN True Negative images, FlP False Positive images (incorrectly identified as positive), and FlN False Negative images (incorrectly identified as negative). Tabs. 3 and 4 show the detailed result values for the training and validation processes, respectively.

[Table 3]

[Table 4]
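The metrics of Eqs. (1)-(5) can be computed directly from the four confusion-matrix counts; the sketch below uses specificity in its standard TrN / (TrN + FlP) form, with illustrative counts.

```python
def confusion_metrics(trp, trn, flp, fln):
    """Evaluation metrics from true/false positive/negative counts."""
    return {
        "sensitivity": trp / (trp + fln),
        "accuracy": (trp + trn) / (trp + trn + flp + fln),
        "precision": trp / (trp + flp),
        "dice": 2 * trp / (flp + 2 * trp + fln),
        "specificity": trn / (trn + flp),
    }

m = confusion_metrics(trp=90, trn=880, flp=10, fln=20)
# accuracy 0.97, precision 0.90, dice ~0.857, sensitivity ~0.818
```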

4.4 Results with U-Net-based CNN Model

Fig. 4 shows the graphical representation of the training and validation loss, accuracy, and dice coefficient. The average statistical values, accuracy: 0.98, sensitivity: 0.93, and specificity: 0.99, are shown in Tab. 5. Tabs. 3 and 4 cover the 25 iterations (epochs); the precision value in the third epoch, 595757568.00, is far higher than the other values because of underfitting at that stage. The negative loss values result from the cross-entropy loss function adopted during the training of the U-Net.


Figure 4: Validation and training view of loss, accuracy, and dice-coefficient

[Table 5]

The images in Fig. 5 below show the brain tumor detection results. The red color indicates the areas where the tumor has been segmented. The model used in this proposed work performed well on both tumor detection and segmentation. Various modalities and labels were used to achieve these results, and MRI images from the BRATS2020 dataset were used to evaluate the model.


Figure 5: Sample images for results

5  Conclusion and Future Work

In this research, a U-Net-based CNN technique is developed. In this approach, data pre-processing is carried out using four modalities, and the brain tumor is classified into four labels for segmentation purposes. This research aims at detecting brain tumors in MRI images and improving efficiency using a U-Net-based CNN model. Our proposed model achieved an accuracy of 0.98 and is a straightforward method. The BRATS2020 dataset is used to evaluate the model's efficiency, showing that it outperforms existing techniques. In the future, this method will be extended under the principles of a Graph Neural Network (GNN) model to detect brain tumors on different datasets, further improving accuracy and precision.

Dataset: For this research article, we have used this dataset (https://www.kaggle.com/datasets/awsaf49/BraTS20-dataset-training-validation).

Funding Statement: Authors would like to acknowledge the support of the Deputy for Research and Innovation- Ministry of Education, Kingdom of Saudi Arabia for funding this research through a project (NU/IFC/ENT/01/014) under the institutional funding committee at Najran University, Kingdom of Saudi Arabia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. S. Das, G. Nayak, L. Saba, M. Kalra, J. S. Suri et al., “An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review,” Computers in Biology and Medicine, vol. 143, no. 4, pp. 1–20, 2022.
  2. Q. T. Ostrom, G. Cioffi, H. Gittleman, N. Patil, K. Waite et al., “CBTRUS statistical report: Primary brain and other central nervous system tumors diagnosed in the United States in 2012–2016,” Neuro-Oncology, vol. 21, no. 5, pp. 1–10, 2019.
  3. N. MacAulay, “Molecular mechanisms of brain water transport,” Nature Reviews Neuroscience, vol. 22, no. 6, pp. 326–344, 2021.
  4. F. Özyurt, E. Sert and D. Avcı, “An expert system for brain tumor detection: Fuzzy C-means with super resolution and convolutional neural network with extreme learning machine,” Medical Hypotheses, vol. 134, no. 1, pp. 1–8, 2020.
  5. Y. Chen, M. Dang and Z. Zhang, "Brain mechanisms underlying neuropsychiatric symptoms in Alzheimer's disease: A systematic review of symptom-general and -specific lesion patterns," Molecular Neurodegeneration, vol. 16, no. 1, pp. 1–22, 2021.
  6. S. Ahuja, B. Panigrahi and T. Gandhi, “Transfer learning based brain tumor detection and segmentation using superpixel technique,” in Proc. 2020 Int. Conf. on Contemporary Computing and Applications (IC3A), Lucknow, India, pp. 244–249, 2020.
  7. J. Zhou, H. Y. Heo, L. Knutsson, P. C. van Zijl and S. Jiang, “APT-weighted MRI: Techniques, current neuro applications, and challenging issues,” Journal of Magnetic Resonance Imaging, vol. 50, no. 2, pp. 347–364, 2019.
  8. T. L. Jones, T. J. Byrnes, G. Yang, F. A. Howe, B. A. Bell et al., “Brain tumor classification using the diffusion tensor image segmentation (D-SEG) technique,” Neuro-Oncology, vol. 17, no. 3, pp. 466–476, 2015.
  9. M. Soltaninejad, G. Yang, T. Lambrou, N. Allinson, T. L. Jones et al., “Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI,” International Journal of Computer Assisted Radiology and Surgery, vol. 12, no. 2, pp. 183–203, 2017.
  10. S. Bauer, R. Wiest, L. -P. Nolte and M. Reyes, “A survey of MRI-based medical image analysis for brain tumor studies,” Physics in Medicine & Biology, vol. 58, no. 13, pp. 97–129, 2013.
  11. N. Burkart and M. F. Huber, “A survey on the explainability of supervised machine learning,” Journal of Artificial Intelligence Research, vol. 70, no. 1, pp. 245–317, 2021.
  12. S. Pereira, A. Pinto, V. Alves and C. A. Silva, “Brain tumor segmentation using convolutional neural networks in MRI images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016.
  13. G. Goh, N. Cammarata, C. Voss, S. Carter, M. Petrov et al., “Multimodal neurons in artificial neural networks,” Distill, vol. 6, no. 3, pp. 30–50, 2021.
  14. E. F. Badran, E. G. Mahmoud and N. Hamdy, “An algorithm for detecting brain tumors in MRI images,” in Proc. The 2010 Int. Conf. on Computer Engineering & System, Cairo, Egypt, pp. 368–373, 2010.
  15. E. Abdel-Maksoud, M. Elmogy and R. Al-Awadi, “Brain tumor segmentation based on a hybrid clustering technique,” Egyptian Informatics Journal, vol. 16, no. 1, pp. 71–81, 20
  16. A. Thiyagarajan and U. Pandurangan, “Comparative analysis of classifier Performance on MR brain images,” The International Arab Journal of Information Technology, vol. 12, no. 6, pp. 772–779, 2015.
  17. P. Szwarc, J. Kawa, M. Rudzki and E. Pietka, “Automatic brain tumour detection and neovasculature assessment with multiseries MRI analysis,” Computerized Medical Imaging and Graphics, vol. 46, no. 12, pp. 178–190, 2015.
  18. M. Havaei, A. Davy, D. Warde-Farley, A. Biard, A. Courville et al., “Brain tumor segmentation with deep neural networks,” Medical Image Analysis, vol. 35, no. 1, pp. 18–31, 2017.
  19. H. Dong, G. Yang, F. Liu, Y. Mo and Y. Guo, “Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks,” in Proc. Annual Conf. on Medical Image Understanding and Analysis, Edinburgh, UK, pp. 506–517, 2017.
  20. J. Dogra, S. Jain and M. Sood, “Novel seed selection techniques for MR brain image segmentation using graph cut,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 8, no. 4, pp. 389–399, 20
  21. S. Mamatha, “Detection of brain tumor in MR images using hybrid fuzzy C-mean clustering with graph cut segmentation technique,” Turkish Journal of Computer and Mathematics Education, vol. 12, no. 10, pp. 4570–4577, 20
  22. M. Jian, X. Zhang, L. Ma and H. Yu, “Tumor detection in MRI brain images based on saliency computational modeling,” International Federation of Automatic Control-Papers On Line, vol. 53, no. 5, pp. 43–46, 2020.
  23. S. J. Ghoushchi, R. Ranjbarzadeh, A. H. Dadkhah, Y. Pourasad and M. Bendechache, “An extended approach to predict retinopathy in diabetic patients using the genetic algorithm and fuzzy C-means,” BioMed Research International, vol. 2021, no. 1, pp. 1–13, 2021.
  24.  M. S. Kumar, “Graph based brain network structure and brain MRI segmentation techniques,” International Journal of Recent Technology and Engineering (IJRTE), vol. 8, no. 1, pp. 1–9, 2020.
  25. E. Shelhamer, J. Long and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2016.
  26. D. Hu, “An introductory survey on attention mechanisms in NLP problems,” in Proc. SAI Intelligent Systems Conf., London, UK, pp. 432–448, 2019.
  27. O. Ronneberger, P. Fischer and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. on Medical Image Computing And Computer-Assisted Intervention, Munich, Germany, pp. 234–241, 2015.
  28. Q. Yang, N. Li, Z. Zhao, X. Fan, E. I. Chang et al., “MRI cross-modality image-to-image translation,” Scientific Reports, vol. 10, no. 1, pp. 1–18, 2020.
  29. L. Weninger, O. Rippel, S. Koppers and D. Merhof, “Segmentation of brain tumors and patient survival prediction: Methods for the BraTS, 2018 challenge,” in Proc. Int. MICCAI Brainlesion Workshop, Granada, Spain, pp. 3–12, 2018.
  30. M. Yin, J. Wortman Vaughan and H. Wallach, “Understanding the effect of accuracy on trust in machine learning models,” in Proc. the 2019 Chi Conf. on Human Factors in Computing Systems, Glasgow Scotland, Uk, pp. 1–12, 2019.

Cite This Article

R. Ghulam, S. Fatima, T. Ali, N. A. Zafar, A. A. Asiri et al., "A u-net-based cnn model for detection and segmentation of brain tumor," Computers, Materials & Continua, vol. 74, no.1, pp. 1333–1349, 2023. https://doi.org/10.32604/cmc.2023.031695


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.