Search Results (3)
  • Open Access

    REVIEW

    Next-Generation Lightweight Explainable AI for Cybersecurity: A Review on Transparency and Real-Time Threat Mitigation

    Khulud Salem Alshudukhi, Sijjad Ali, Mamoona Humayun, Omar Alruwaili

    CMES-Computer Modeling in Engineering & Sciences, Vol. 145, No. 3, pp. 3029-3085, 2025. DOI: 10.32604/cmes.2025.073705. Published 23 December 2025.

    Abstract: Problem: The integration of Artificial Intelligence (AI) into cybersecurity, while enhancing threat detection, is hampered by the “black box” nature of complex models, eroding trust, accountability, and regulatory compliance. Explainable AI (XAI) aims to resolve this opacity but introduces a critical new vulnerability: the adversarial exploitation of model explanations themselves. Gap: Current research lacks a comprehensive synthesis of this dual role of XAI in cybersecurity, as both a tool for transparency and a potential attack vector. There is a pressing need to systematically analyze the trade-offs between interpretability and security, evaluate defense mechanisms, and outline a…
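As a concrete illustration of the dual role this review describes, the sketch below (not from the paper; the feature names and the toy linear scorer are hypothetical) computes a simple occlusion-style attribution for a "black box" malware score. The same ranking that gives an analyst transparency also tells an adversary which feature to suppress first.

```python
# Illustrative sketch only: occlusion-style attribution for a toy detector.
# FEATURES and WEIGHTS are hypothetical, not taken from the reviewed work.
import numpy as np

FEATURES = ["entropy", "import_count", "packed", "net_calls"]  # assumed names
WEIGHTS = np.array([0.9, 0.3, 1.2, 0.6])                       # toy model

def malware_score(x: np.ndarray) -> float:
    """Toy 'black box' detector: higher score = more likely malicious."""
    return float(1 / (1 + np.exp(-WEIGHTS @ x)))

def occlusion_attribution(x: np.ndarray) -> np.ndarray:
    """Score drop when each feature is zeroed out: a simple explanation."""
    base = malware_score(x)
    attr = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0          # remove one feature at a time
        attr[i] = base - malware_score(occluded)
    return attr

sample = np.array([0.8, 0.2, 1.0, 0.5])
for name, a in sorted(zip(FEATURES, occlusion_attribution(sample)),
                      key=lambda t: -t[1]):
    print(f"{name:>14}: {a:+.3f}")
# The ranking explains why the alert fired, but it equally exposes which
# feature an attacker should manipulate first: the attack surface the
# review calls the adversarial exploitation of explanations.
```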

  • Open Access

    ARTICLE

    Adversarial Defense Technology for Small Infrared Targets

    Tongan Yu, Yali Xue, Yiming He, Shan Cui, Jun Hong

    CMC-Computers, Materials & Continua, Vol. 81, No. 1, pp. 1235-1250, 2024. DOI: 10.32604/cmc.2024.056075. Published 15 October 2024.

    Abstract: With the rapid development of deep learning-based detection algorithms, deep learning is widely used for infrared small target detection. However, well-designed adversarial samples can evade human visual perception while causing a serious decline in the detection quality of the recognition model. In this paper, an adversarial defense technology for small infrared targets is proposed to improve model robustness. Adversarial samples with strong transferability not only improve the generalization of the defense technology but also reduce training cost. Therefore, this study adopts the concept of maximizing multidimensional feature distortion, applying noise…
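The abstract is truncated before the method details, so the sketch below uses FGSM (a standard signed-gradient attack), purely as a stand-in for the gradient-based adversarial samples such a defense must withstand; it is not the authors' feature-distortion method, and the one-channel toy model and shapes are assumptions.

```python
# Minimal FGSM sketch, assuming a generic single-channel classifier
# (infrared imagery is typically 1-channel). Stand-in only.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 8 / 255) -> torch.Tensor:
    """Return x plus an eps-bounded perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step; clamp back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Usage with a hypothetical toy model and infrared-like batch:
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
x = torch.rand(4, 1, 64, 64)      # toy batch
y = torch.randint(0, 2, (4,))     # target / background labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())    # perturbation stays within eps
```

The usual defense pattern, which the paper's training-cost remark alludes to, is adversarial training: folding batches like `x_adv` back into the training loop so the model learns under the perturbation it will face.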

  • Open Access

    ARTICLE

    Adversarial Attack-Based Robustness Evaluation for Trustworthy AI

    Eungyu Lee, Yongsoo Lee, Taejin Lee

    Computer Systems Science and Engineering, Vol. 47, No. 2, pp. 1919-1935, 2023. DOI: 10.32604/csse.2023.039599. Published 28 July 2023.

    Abstract: Artificial Intelligence (AI) technology has been extensively researched in various fields, including malware detection. AI models must be trustworthy before AI systems can take on critical decision-making and resource-protection roles. Robustness to adversarial attacks is a significant barrier to trustworthy AI. Although various adversarial attack and defense methods are actively studied, there is a lack of research on robustness evaluation metrics that serve as standards for determining whether AI models are safe and reliable against adversarial attacks. An AI model’s robustness level cannot be evaluated by traditional evaluation…
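The paper's specific metrics sit behind the truncated abstract, so the following is a generic baseline only: robust accuracy swept over a grid of perturbation budgets and summarized as the normalized area under that curve. The random-noise attack is a deliberately weak placeholder; any stronger attack callable with the same signature slots in.

```python
# Generic robustness-evaluation baseline, not the authors' metric.
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def random_noise_attack(model, x, y, eps):
    """Weak placeholder attack: eps-bounded random sign noise."""
    return (x + eps * torch.randn_like(x).sign()).clamp(0, 1)

def robust_accuracy_curve(model, x, y, attack, eps_grid):
    """Accuracy after attack at each budget; eps_grid must start at 0."""
    accs = [accuracy(model, x, y)]                 # clean accuracy at eps=0
    for eps in eps_grid[1:]:
        accs.append(accuracy(model, attack(model, x, y, eps), y))
    return accs

def robustness_score(eps_grid, accs):
    """Normalized area under the accuracy-vs-eps curve, in [0, 1]."""
    area = torch.trapz(torch.tensor(accs), torch.tensor(eps_grid)).item()
    return area / (eps_grid[-1] - eps_grid[0])

# Usage with a hypothetical toy model and data:
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 2))
x, y = torch.rand(32, 1, 8, 8), torch.randint(0, 2, (32,))
eps_grid = [0.0, 0.02, 0.05, 0.1]
accs = robust_accuracy_curve(model, x, y, random_noise_attack, eps_grid)
print(robustness_score(eps_grid, accs))  # single score summarizing the curve
```

A single clean-accuracy number hides how quickly a model degrades under attack; sweeping the budget and integrating the curve is one simple way to make that degradation comparable across models.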
