Search Results (2)
  • Open Access

    ARTICLE

    Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization

    Zhiyi Ding, Lei Sun*, Xiuqing Mao, Leyu Dai, Ruiyang Ding

    CMC-Computers, Materials & Continua, Vol.80, No.3, pp. 4387-4412, 2024, DOI:10.32604/cmc.2024.052196 - 12 September 2024

    Abstract Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models and thus limiting their transferability. Furthermore, previous research has predominantly concentrated on…
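    The truncated abstract does not describe the RCEN framework itself, but the logit-loss idea named in the title is straightforward to illustrate. Below is a minimal, hypothetical PyTorch sketch of a targeted attack that performs gradient ascent on the target-class logit, shown for a plain image classifier rather than a detector; the function name, model interface, and hyperparameters are assumptions for illustration, not the paper's method.

    ```python
    import torch

    def targeted_logit_attack(model, x, target_class, eps=8/255, alpha=2/255, steps=100):
        """Iterative targeted attack that directly maximizes the target-class
        logit instead of minimizing cross-entropy. All names and default
        hyperparameters here are illustrative assumptions, not taken from
        the paper. `x` is a batch of images with pixel values in [0, 1]."""
        x_adv = x.clone().detach()
        batch = torch.arange(x.size(0))
        for _ in range(steps):
            x_adv.requires_grad_(True)
            logits = model(x_adv)                      # shape: (batch, num_classes)
            # "Logit loss": raise the target logit directly; this is known to
            # transfer better across models than cross-entropy for targeted attacks.
            loss = logits[batch, target_class].sum()
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()        # gradient ascent step
                x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the L-inf ball
                x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid pixel range
        return x_adv.detach()
    ```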

  • Open Access

    REVIEW

    Ensuring User Privacy and Model Security via Machine Unlearning: A Review

    Yonghao Tang, Zhiping Cai*, Qiang Liu, Tongqing Zhou, Qiang Ni

    CMC-Computers, Materials & Continua, Vol.77, No.2, pp. 2645-2656, 2023, DOI:10.32604/cmc.2023.032307 - 29 November 2023

    Abstract As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology, and other fields. In the training of machine learning models, trainers need to use a large amount of practical data, which inevitably involves user privacy. Besides, by polluting the training data, a malicious adversary can poison the model, thus compromising model security. Data providers hope that the model trainer can prove the confidentiality of the model to them, and the trainer will be required to withdraw data when that trust collapses. Meanwhile, trainers hope to forget the injected data…
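    For context on what "withdrawing" data means in practice: the exact but expensive baseline that machine unlearning methods try to approximate is retraining from scratch without the forgotten records. A minimal, hypothetical scikit-learn sketch follows; the function name and the choice of LogisticRegression are illustrative assumptions, not from the review.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def unlearn_by_retraining(X, y, forget_idx):
        """Exact (but costly) unlearning baseline: drop the records to be
        forgotten and retrain from scratch. The retrained model has never
        seen the withdrawn rows, so no trace of them remains."""
        keep = np.setdiff1d(np.arange(len(X)), forget_idx)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[keep], y[keep])  # train only on the retained data
        return model
    ```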
