Search Results (46)
  • Open Access

    ARTICLE

    SwinHCAD: A Robust Multi-Modality Segmentation Model for Brain Tumors Using Transformer and Channel-Wise Attention

    Seyong Jin1, Muhammad Fayaz2, L. Minh Dang3, Hyoung-Kyu Song3, Hyeonjoon Moon2,*

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.070667 - 10 November 2025

    Abstract Brain tumors require precise segmentation for diagnosis and treatment planning due to their complex morphology and heterogeneous characteristics. While MRI-based automatic brain tumor segmentation technology reduces the burden on medical staff and provides quantitative information, existing methodologies and recent models still struggle to accurately capture and classify the fine boundaries and diverse morphologies of tumors. To address these challenges and maximize brain tumor segmentation performance, this research introduces a novel SwinUNETR-based model that integrates a new decoder block, the Hierarchical Channel-wise Attention Decoder (HCAD), into a powerful SwinUNETR encoder. The HCAD…
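
    To make the channel-wise attention idea concrete, here is a minimal, generic PyTorch sketch of a decoder block that re-weights feature channels after fusing a skip connection. It is not the paper's HCAD; the class names, reduction ratio, and normalization choices are assumptions for illustration.

    ```python
    import torch
    import torch.nn as nn

    class ChannelAttention3D(nn.Module):
        """Squeeze-and-excitation-style channel gating for 3D feature maps (generic sketch)."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: one descriptor per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                            # per-channel gate in [0, 1]
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, *_ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w                                 # re-weight the fused feature channels

    class DecoderBlock3D(nn.Module):
        """Upsample, fuse the encoder skip connection, then apply channel attention."""
        def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
            super().__init__()
            self.up = nn.ConvTranspose3d(in_ch, out_ch, kernel_size=2, stride=2)
            self.conv = nn.Sequential(
                nn.Conv3d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
                nn.InstanceNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            self.attn = ChannelAttention3D(out_ch)

        def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
            x = torch.cat([self.up(x), skip], dim=1)
            return self.attn(self.conv(x))
    ```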

  • Open Access

    ARTICLE

    Enhancement of Medical Imaging Technique for Diabetic Retinopathy: Realistic Synthetic Image Generation Using GenAI

    Damodharan Palaniappan1, Tan Kuan Tak2, K. Vijayan3, Balajee Maram4, Pravin R Kshirsagar5, Naim Ahmad6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4107-4127, 2025, DOI:10.32604/cmes.2025.073387 - 23 December 2025

    Abstract A phase-aware cross-modal framework is presented that synthesizes UWF_FA from non-invasive UWF_RI for diabetic retinopathy (DR) stratification. A curated cohort of 1198 patients (2915 UWF_RI and 17,854 UWF_FA images) with strict registration quality supports training across three angiographic phases (initial, mid, final). The generator is based on a modified pix2pixHD with an added Gradient Variance Loss to better preserve microvasculature, and is evaluated using MAE, PSNR, SSIM, and MS-SSIM on held-out pairs. Quantitatively, the mid phase achieves the lowest MAE (98.76 ± 42.67), while SSIM remains high across phases. Expert review shows substantial agreement (Cohen’s …
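
    The Gradient Variance Loss mentioned above can be sketched as a penalty on differences in local gradient statistics between generated and reference angiograms, which discourages washed-out microvasculature. The exact operator, patch size, and reduction used in the paper are not given here, so the following PyTorch sketch is only an illustration under those assumptions (single-channel inputs assumed).

    ```python
    import torch
    import torch.nn.functional as F

    def _sobel_grads(img: torch.Tensor):
        """Horizontal and vertical Sobel gradients of a (B, 1, H, W) image batch."""
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=img.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        return F.conv2d(img, kx, padding=1), F.conv2d(img, ky, padding=1)

    def _patch_variance(g: torch.Tensor, patch: int = 8) -> torch.Tensor:
        """Variance of gradient responses inside non-overlapping patches."""
        mean = F.avg_pool2d(g, patch)
        mean_sq = F.avg_pool2d(g * g, patch)
        return mean_sq - mean * mean

    def gradient_variance_loss(fake: torch.Tensor, real: torch.Tensor, patch: int = 8) -> torch.Tensor:
        """Penalize mismatched local gradient variance between synthesized and real images."""
        fx, fy = _sobel_grads(fake)
        rx, ry = _sobel_grads(real)
        return (F.mse_loss(_patch_variance(fx, patch), _patch_variance(rx, patch)) +
                F.mse_loss(_patch_variance(fy, patch), _patch_variance(ry, patch)))
    ```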

  • Open Access

    ARTICLE

    An Explainable Deep Learning Framework for Kidney Cancer Classification Using VGG16 and Layer-Wise Relevance Propagation on CT Images

    Asma Batool1, Fahad Ahmed1, Naila Sammar Naz1, Ayman Altameem2, Ateeq Ur Rehman3,4, Khan Muhammad Adnan5,*, Ahmad Almogren6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.145, No.3, pp. 4129-4152, 2025, DOI:10.32604/cmes.2025.073149 - 23 December 2025

    Abstract Early and accurate cancer diagnosis through medical imaging is crucial for guiding treatment and enhancing patient survival. However, many state-of-the-art deep learning (DL) methods remain opaque and lack clinical interpretability. This paper presents an explainable artificial intelligence (XAI) framework that combines a fine-tuned Visual Geometry Group 16-layer network (VGG16) convolutional neural network with layer-wise relevance propagation (LRP) to deliver high-performance classification and transparent decision support. This approach is evaluated on the publicly available Kaggle kidney cancer imaging dataset, which comprises labeled cancerous and non-cancerous kidney scans. The proposed model achieved 98.75% overall accuracy, with precision, …
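
    Layer-wise relevance propagation redistributes a prediction score backwards, layer by layer, in proportion to each unit's contribution. The NumPy sketch below shows the standard epsilon rule for a single dense layer; applying such rules through every layer of a network like VGG16 yields a pixel-level relevance heatmap. This is a generic illustration, not the authors' implementation.

    ```python
    import numpy as np

    def lrp_epsilon_dense(a: np.ndarray, w: np.ndarray, b: np.ndarray,
                          relevance_out: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """Redistribute relevance from a dense layer's outputs back to its inputs (epsilon rule).

        a: input activations, shape (n_in,)
        w: weights, shape (n_in, n_out);  b: biases, shape (n_out,)
        relevance_out: relevance of the layer's outputs, shape (n_out,)
        """
        z = a @ w + b                               # forward pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser avoids division by ~0
        s = relevance_out / z                       # relevance share per output unit
        return a * (w @ s)                          # inputs receive relevance in proportion
                                                    # to their contribution a_i * w_ij
    ```

    Chaining this rule (and its convolutional analogue) from the class logit down to the input produces the relevance map that highlights which image regions drove the classification.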

  • Open Access

    REVIEW

    A Systematic Review of YOLO-Based Object Detection in Medical Imaging: Advances, Challenges, and Future Directions

    Zhenhui Cai, Kaiqing Zhou*, Zhouhua Liao

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2255-2303, 2025, DOI:10.32604/cmc.2025.067994 - 23 September 2025

    Abstract The YOLO (You Only Look Once) series, a leading single-stage object detection framework, has gained significant prominence in medical-image analysis due to its real-time efficiency and robust performance. Recent iterations of YOLO have further enhanced its accuracy and reliability in critical clinical tasks such as tumor detection, lesion segmentation, and microscopic image analysis, thereby accelerating the development of clinical decision support systems. This paper systematically reviews advances in YOLO-based medical object detection from 2018 to 2024. It compares YOLO’s performance with other models (e.g., Faster R-CNN, RetinaNet) in medical contexts, summarizes standard evaluation metrics (e.g., …
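
    Detection performance in such studies is typically reported against an Intersection-over-Union (IoU) threshold. The snippet below is a minimal NumPy implementation of IoU for corner-format boxes, included only as a reference for the metric; it is not taken from any of the surveyed papers.

    ```python
    import numpy as np

    def iou_xyxy(box_a: np.ndarray, box_b: np.ndarray) -> float:
        """Intersection-over-Union of two boxes given as [x1, y1, x2, y2]."""
        x1, y1 = np.maximum(box_a[:2], box_b[:2])          # top-left corner of the overlap
        x2, y2 = np.minimum(box_a[2:], box_b[2:])          # bottom-right corner of the overlap
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)      # zero if the boxes do not overlap
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    # A predicted lesion box is usually counted as a true positive when
    # iou_xyxy(pred, ground_truth) >= 0.5 (threshold varies by benchmark).
    ```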

  • Open Access

    REVIEW

    The Role of Artificial Intelligence in Improving Diagnostic Accuracy in Medical Imaging: A Review

    Omar Sabri1, Bassam Al-Shargabi2,*, Abdelrahman Abuarqoub2

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2443-2486, 2025, DOI:10.32604/cmc.2025.066987 - 23 September 2025

    Abstract This review comprehensively analyzes advancements in artificial intelligence, particularly machine learning and deep learning, in medical imaging, focusing on their transformative role in enhancing diagnostic accuracy. Our in-depth analysis of 138 selected studies reveals that artificial intelligence (AI) algorithms frequently achieve diagnostic performance comparable to, and often surpassing, that of human experts, excelling in complex pattern recognition. Key findings include earlier detection of conditions like skin cancer and diabetic retinopathy, alongside radiologist-level performance for pneumonia detection on chest X-rays. These technologies profoundly transform imaging by significantly improving processes in classification, segmentation, and sequential analysis across …

  • Open Access

    REVIEW

    Advanced Feature Selection Techniques in Medical Imaging—A Systematic Literature Review

    Sunawar Khan1, Tehseen Mazhar1,2,*, Naila Sammar Naz1, Fahed Ahmed1, Tariq Shahzad3, Atif Ali4, Muhammad Adnan Khan5,*, Habib Hamam6,7,8,9

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2347-2401, 2025, DOI:10.32604/cmc.2025.066932 - 23 September 2025

    Abstract Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron …
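
    As a concrete example of the filter family mentioned above, the following NumPy sketch ranks features by a Fisher-score-style criterion (between-class versus within-class variance). The feature matrix, labels, and cut-off are hypothetical; it is only meant to show what a filter method computes.

    ```python
    import numpy as np

    def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
        """Score each feature by how well its per-class means separate relative to its spread.

        X: (n_samples, n_features) feature matrix; y: (n_samples,) integer class labels.
        Higher scores indicate more class-discriminative features.
        """
        overall_mean = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
            within += len(Xc) * Xc.var(axis=0)
        return between / (within + 1e-12)

    # Example filter step (hypothetical cut-off): keep the 50 highest-scoring features.
    # top_k = np.argsort(fisher_scores(X, y))[::-1][:50]
    ```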

  • Open Access

    REVIEW

    Deep Learning in Biomedical Image and Signal Processing: A Survey

    Batyrkhan Omarov1,2,3,4,*

    CMC-Computers, Materials & Continua, Vol.85, No.2, pp. 2195-2253, 2025, DOI:10.32604/cmc.2025.064799 - 23 September 2025

    Abstract Deep learning now underpins many state-of-the-art systems for biomedical image and signal processing, enabling automated lesion detection, physiological monitoring, and therapy planning with accuracy that rivals expert performance. This survey reviews the principal model families, including convolutional, recurrent, generative, reinforcement, autoencoder, and transfer-learning approaches, emphasising how their architectural choices map to tasks such as segmentation, classification, reconstruction, and anomaly detection. A dedicated treatment of multimodal fusion networks shows how imaging features can be integrated with genomic profiles and clinical records to yield more robust, context-aware predictions. To support clinical adoption, we outline post-hoc explainability …

  • Open Access

    REVIEW

    Deep Multi-Scale and Attention-Based Architectures for Semantic Segmentation in Biomedical Imaging

    Majid Harouni1,*, Vishakha Goyal1, Gabrielle Feldman1, Sam Michael2, Ty C. Voss1

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 331-366, 2025, DOI:10.32604/cmc.2025.067915 - 29 August 2025

    Abstract Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational …
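
    A common building block behind the multi-scale fusion discussed above is a set of parallel dilated convolutions whose outputs are concatenated and blended. The ASPP-style PyTorch sketch below is a generic illustration, not any specific surveyed architecture; the dilation rates and channel counts are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class MultiScaleFusion(nn.Module):
        """Fuse fine detail and wider context via parallel dilated convolutions."""
        def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            ])
            self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = [torch.relu(branch(x)) for branch in self.branches]  # same spatial size per branch
            return self.fuse(torch.cat(feats, dim=1))                    # 1x1 conv blends the scales
    ```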

  • Open Access

    ARTICLE

    Adaptive Fusion Neural Networks for Sparse-Angle X-Ray 3D Reconstruction

    Shaoyong Hong1, Bo Yang2, Yan Chen2, Hao Quan3, Shan Liu4, Minyi Tang5,*, Jiawei Tian6,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.144, No.1, pp. 1091-1112, 2025, DOI:10.32604/cmes.2025.066165 - 31 July 2025

    Abstract 3D medical image reconstruction has significantly enhanced diagnostic accuracy, yet the reliance on densely sampled projection data remains a major limitation in clinical practice. Sparse-angle X-ray imaging, though safer and faster, poses challenges for accurate volumetric reconstruction due to limited spatial information. This study proposes a 3D reconstruction neural network based on adaptive weight fusion (AdapFusionNet) to achieve high-quality 3D medical image reconstruction from sparse-angle X-ray images. To address the issue of spatial inconsistency in multi-angle image reconstruction, an innovative adaptive fusion module was designed to score initial reconstruction results during the inference stage and …
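
    The score-then-fuse idea can be illustrated as follows: a small network rates each candidate reconstruction and the volumes are averaged with softmax weights. This PyTorch sketch only mirrors the general mechanism described in the abstract; the actual AdapFusionNet scorer and fusion rule are defined in the paper, and the layer sizes here are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class AdaptiveFusion(nn.Module):
        """Fuse several candidate 3D reconstructions with learned, input-dependent weights."""
        def __init__(self):
            super().__init__()
            self.scorer = nn.Sequential(          # tiny 3D CNN that rates each candidate volume
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),
                nn.Linear(8, 1),
            )

        def forward(self, candidates: torch.Tensor) -> torch.Tensor:
            # candidates: (B, N, D, H, W) -- N initial reconstructions from different angle subsets
            b, n, d, h, w = candidates.shape
            scores = self.scorer(candidates.view(b * n, 1, d, h, w)).view(b, n)
            weights = torch.softmax(scores, dim=1).view(b, n, 1, 1, 1)
            return (weights * candidates).sum(dim=1)   # fused volume, shape (B, D, H, W)
    ```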

  • Open Access

    REVIEW

    Transformers for Multi-Modal Image Analysis in Healthcare

    Sameera V Mohd Sagheer1,*, Meghana K H2, P M Ameer3, Muneer Parayangat4, Mohamed Abbas4

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4259-4297, 2025, DOI:10.32604/cmc.2025.063726 - 30 July 2025

    Abstract Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of the patient's health status. Each of these methods contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, the amalgamation of data from multiple modalities presents difficulties due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle to handle multi-modal complexities, lacking the capacity to model global relationships. This research presents a novel approach for …
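
    Transformer-based fusion typically lets tokens from one modality attend to tokens from another, which is one way to model the global relationships noted above. The PyTorch sketch below shows a minimal cross-attention block with a residual connection; the embedding dimension, head count, and choice of query/context modalities are assumptions for illustration.

    ```python
    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        """One modality's tokens (queries) attend over another's (keys/values)."""
        def __init__(self, dim: int = 256, heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, query_tokens: torch.Tensor, context_tokens: torch.Tensor) -> torch.Tensor:
            # query_tokens: (B, Nq, dim) from one modality; context_tokens: (B, Nc, dim) from another
            fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
            return self.norm(query_tokens + fused)     # residual keeps the original token stream

    # Example (hypothetical shapes): fuse MRI patch tokens with PET patch tokens.
    # mri_tokens = torch.randn(2, 196, 256); pet_tokens = torch.randn(2, 196, 256)
    # fused = CrossModalAttention()(mri_tokens, pet_tokens)
    ```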

Displaying results 1-10 of 46.