Open Access
ARTICLE
Yichuan Liu1, Chungen Xu1,*, Lei Xu1, Lin Mei1, Xing Zhang2, Cong Zuo3
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 151-164, 2021, DOI:10.32604/jihpp.2021.026944
Abstract The widespread acceptance of machine learning, particularly of neural
networks, has led to great success in many areas, such as recommender systems,
medical predictions, and recognition. It is becoming possible for any individual
with a personal electronic device and Internet access to complete complex
machine learning tasks using cloud servers. However, it must be taken into
consideration that clients' data may be exposed to those cloud servers. Recent
work to preserve data confidentiality has allowed services to be outsourced
using homomorphic encryption schemes. But these architectures assume
honest-but-curious cloud servers, which are unable to tell…
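To make the outsourcing setting concrete, here is a minimal sketch of privacy-preserving inference for a single linear layer under additive homomorphic encryption, using the python-paillier (phe) library. The weights, input values, and overall flow are illustrative assumptions, not the protocol proposed in the article.

```python
# Minimal sketch: outsourced linear-layer inference on Paillier ciphertexts.
# The model weights and feature values below are hypothetical examples.
from phe import paillier

# Client: generate a keypair and encrypt the input features.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
features = [0.5, -1.2, 3.3]
enc_features = [public_key.encrypt(x) for x in features]

# Server (honest-but-curious): evaluate a plaintext linear layer on
# ciphertexts. Paillier supports ciphertext addition and multiplication
# by a plaintext scalar, so a weighted sum is computable without the
# server ever decrypting the client's data.
weights = [0.7, 0.1, -0.4]
bias = 0.25
enc_score = sum(w * c for w, c in zip(weights, enc_features)) + bias

# Client: only the private-key holder can recover the prediction score.
score = private_key.decrypt(enc_score)
print(score)  # ≈ 0.7*0.5 + 0.1*(-1.2) + (-0.4)*3.3 + 0.25
```

Note that Paillier only supports additions and plaintext-scalar multiplications, which is why the server's weights stay in plaintext here; fully homomorphic schemes lift that restriction at a higher computational cost.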
Open Access
ARTICLE
Pengzhi Xu1,2, Zetian Mai1,2, Yuhao Lin1, Zhen Guo1,2,*, Victor S. Sheng3
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 165-179, 2021, DOI:10.32604/jihpp.2021.027280
Abstract With the increase in software complexity, the security threats faced by
software are also growing day by day, so software vulnerability mining is
attracting more and more attention. Because source code has rich semantics and
strong comprehensibility, source code vulnerability mining has been widely
applied and has developed significantly. However, due to the protection of
commercial interests and intellectual property rights, source code is often
difficult to obtain, so research on vulnerability mining techniques for binary
code has strong practical value. Based on an investigation of related
technologies, this article first introduces the…
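As a concrete illustration of one common building block in binary vulnerability mining, the sketch below disassembles a raw byte buffer with the Capstone disassembler and derives simple instruction n-gram features of the kind a downstream vulnerability classifier could consume. The byte string and the featurization are illustrative assumptions, not a technique taken from the article.

```python
# Minimal sketch: disassemble machine code and extract mnemonic bigrams.
from collections import Counter
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Hypothetical x86-64 stub: push rbp; mov rbp, rsp; sub rsp, 0x10; leave; ret
code = b"\x55\x48\x89\xe5\x48\x83\xec\x10\xc9\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
mnemonics = [insn.mnemonic for insn in md.disasm(code, 0x1000)]

# Represent the code region as mnemonic bigram counts, a simple feature
# vector that a learning-based vulnerability miner could take as input.
bigrams = Counter(zip(mnemonics, mnemonics[1:]))
print(mnemonics)  # ['push', 'mov', 'sub', 'leave', 'ret']
print(bigrams)
```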
Open Access
ARTICLE
Mingting Liu1, Xiaozhang Liu1,*, Anli Yan1, Xiulai Li1,2, Gengquan Xie1, Xin Tang3
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 181-192, 2021, DOI:10.32604/jihpp.2021.027385
Abstract As machine learning moves into high-risk and sensitive applications
such as medical care, autonomous driving, and financial planning, how to
interpret the predictions of a black-box model becomes key to whether
people can trust machine learning decisions. Interpretability relies on providing
users with additional information or explanations to improve model transparency
and help users understand model decisions. However, this additional information
inevitably exposes the dataset or the model to the risk of privacy leakage. We
propose a strategy to reduce model privacy leakage for instance-level
interpretability techniques. The process is as follows. First, the user
inputs data into…
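A minimal sketch of the kind of pipeline the abstract describes: an instance-level explanation (here, a plain input-gradient saliency for a toy logistic model) that is perturbed before release so it reveals less about the model and data. The toy model and the Laplace noise mechanism are illustrative assumptions, not the authors' strategy.

```python
# Minimal sketch: noisy release of an instance-level explanation.
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.5, -2.0, 0.3]), 0.1  # hypothetical "black-box" model
x = np.array([0.2, 0.4, -0.7])          # user's query instance

def predict(x):
    # Logistic model: sigmoid(w·x + b)
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Instance explanation: gradient of the predicted probability w.r.t. x,
# i.e. the saliency of each input feature for this prediction.
p = predict(x)
saliency = p * (1 - p) * w

# Mitigation sketch: perturb the explanation before releasing it, so the
# exact gradients (which can leak model/data information) stay private.
noisy_saliency = saliency + rng.laplace(scale=0.05, size=saliency.shape)
print(saliency, noisy_saliency)
```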
Open Access
ARTICLE
Jumana Alsubhi1, Abdulrahman Gharawi1, Mohammad Alahmadi2,*
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 193-200, 2021, DOI:10.32604/jihpp.2021.027871
Abstract Nowadays, machine learning (ML) algorithms cannot succeed without
the availability of an enormous amount of training data. The data may contain
sensitive information, which needs to be protected. Membership inference
attacks attempt to determine whether a target data point was used to train a
certain ML model, which has security and privacy implications. The leakage of
membership information can vary from one machine learning algorithm to
another. In this paper, we conduct an empirical study to explore the performance
of membership inference attacks against four different machine learning
algorithms, namely, K-nearest neighbors, random forest, support vector machine,
and logistic…
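A minimal sketch of a confidence-threshold membership inference attack, run against the four classifier families the paper studies, using scikit-learn. The synthetic dataset, fixed threshold, and accuracy metric are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: confidence-threshold membership inference attack.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
# "In" points train the target model (members); "out" points do not.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_in, y_in)
    # Attack: guess "member" when the model's top-class confidence exceeds
    # a fixed threshold; models tend to be more confident on training data.
    conf_in = model.predict_proba(X_in).max(axis=1)
    conf_out = model.predict_proba(X_out).max(axis=1)
    threshold = 0.9
    attack_acc = 0.5 * ((conf_in > threshold).mean()
                        + (conf_out <= threshold).mean())
    print(f"{name}: membership attack accuracy ≈ {attack_acc:.2f}")
```

The intuition is that a model is typically more confident on points it was trained on, so even this simple thresholding distinguishes members from non-members to some degree; how large that gap is differs across the four algorithms, which is exactly what an empirical comparison like this paper's measures.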