Open Access
ARTICLE
A Secured and Continuously Developing Methodology for Breast Cancer Image Segmentation via U-Net Based Architecture and Distributed Data Training
1 Department of Computer Science and Engineering, Brac University, Dhaka, 1000, Bangladesh
2 Department of Computer Science and Engineering, George Mason University, Fairfax, VA 22030, USA
3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia
4 AI and Big Data Department, Endicott College, Woosong University, Daejeon, 34606, Republic of Korea
* Corresponding Author: Jia Uddin. Email:
Computer Modeling in Engineering & Sciences 2025, 142(3), 2617-2640. https://doi.org/10.32604/cmes.2025.060917
Received 12 November 2024; Accepted 06 February 2025; Issue published 03 March 2025
Abstract
This research introduces a unique approach to segmenting breast cancer images using a U-Net-based architecture. Because the computational demand of image segmentation is very high, we conducted this research to build a system that enables segmentation training on low-power machines. To accomplish this, the dataset is divided into several partitions, and a separate model is trained on each. At prediction time, each trained model produces an initial output for a given input, and the final output is selected by pixel-wise majority voting over these predictions; keeping the data partitions separate also supports data privacy. Furthermore, this distributed training scheme allows several computers to be used simultaneously, so training takes considerably less time than typical training approaches. Even after training is complete, the proposed prediction system allows newly trained models to be added, so prediction accuracy can continue to improve. We evaluated the effectiveness of the final output using four performance metrics: average pixel accuracy, mean absolute error, average specificity, and average balanced accuracy. The experimental results show that the scores for average pixel accuracy, mean absolute error, average specificity, and average balanced accuracy are 0.9216, 0.0687, 0.9477, and 0.8674, respectively. In addition, the proposed method was compared with four state-of-the-art models in terms of total training time and computational resource usage, and it outperformed all of them in both respects.
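As a minimal illustrative sketch (not the authors' exact implementation), the pixel-wise majority voting described above can be expressed as follows, assuming each trained model outputs a binary segmentation mask of identical shape:

```python
import numpy as np

def majority_vote(masks):
    """Combine binary segmentation masks (H x W) from several models
    by pixel-wise majority voting; ties favor the background class."""
    stacked = np.stack(masks, axis=0)        # shape: (n_models, H, W)
    votes = stacked.sum(axis=0)              # per-pixel count of foreground votes
    return (votes > stacked.shape[0] / 2).astype(np.uint8)

# Hypothetical outputs from three separately trained models for one input
m1 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
m2 = np.array([[1, 1], [0, 1]], dtype=np.uint8)
m3 = np.array([[0, 0], [1, 1]], dtype=np.uint8)
print(majority_vote([m1, m2, m3]))           # [[1 0] [1 1]]
```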
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.