Search Results (2)
  • Open Access

    ARTICLE

    CrossLinkNet: An Explainable and Trustworthy AI Framework for Whole-Slide Images Segmentation

    Peng Xiao, Qi Zhong, Jingxue Chen, Dongyuan Wu, Zhen Qin, Erqiang Zhou*

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4703-4724, 2024, DOI:10.32604/cmc.2024.049791 - 20 June 2024

    Abstract In the intelligent medical diagnosis area, Artificial Intelligence (AI)’s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but…

  • Open Access

    ARTICLE

    Adversarial Attack-Based Robustness Evaluation for Trustworthy AI

    Eungyu Lee, Yongsoo Lee, Taejin Lee*

    Computer Systems Science and Engineering, Vol.47, No.2, pp. 1919-1935, 2023, DOI:10.32604/csse.2023.039599 - 28 July 2023

    Abstract Artificial Intelligence (AI) technology has been extensively researched in various fields, including the field of malware detection. AI models must be trustworthy to introduce AI systems into critical decision-making and resource protection roles. The problem of robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively being studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model’s robustness level cannot be evaluated by traditional evaluation…
