Jumana Alsubhi¹, Abdulrahman Gharawi¹, Mohammad Alahmadi²,*
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 193-200, 2021, DOI:10.32604/jihpp.2021.027871
22 March 2022
Abstract Nowadays, machine learning (ML) algorithms cannot succeed without
large amounts of training data. The data could contain
sensitive information, which needs to be protected. Membership inference
attacks attempt to determine whether a target data point was used to train a particular
ML model, which has security and privacy implications. The leakage of
membership information can vary from one machine learning algorithm to
another. In this paper, we conduct an empirical study to explore the performance
of membership inference attacks against three different machine learning
algorithms, namely, K-nearest neighbors, random forest, …
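
To make the attack setting concrete, the sketch below illustrates a simple confidence-threshold membership inference attack against a random forest target model. It is a minimal illustration only: the synthetic dataset, the scikit-learn model, and the fixed threshold of 0.8 are assumptions for demonstration, not the attack or experimental setup evaluated in the paper.

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# Assumptions: synthetic data, RandomForestClassifier target, fixed threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic records split into "member" (used for training) and "non-member" halves.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# The target model is trained only on the member records.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

# Attack signal: models tend to be more confident on points they were trained on.
conf_mem = target.predict_proba(X_mem).max(axis=1)
conf_non = target.predict_proba(X_non).max(axis=1)

# Simple attack rule: predict "member" when the model's confidence exceeds a threshold.
threshold = 0.8  # assumed value; in practice tuned, e.g. via shadow models
pred_mem = conf_mem >= threshold   # correct predictions are True
pred_non = conf_non >= threshold   # correct predictions are False

attack_acc = (pred_mem.sum() + (~pred_non).sum()) / (len(pred_mem) + len(pred_non))
print(f"Mean confidence  members: {conf_mem.mean():.3f}  non-members: {conf_non.mean():.3f}")
print(f"Membership inference attack accuracy: {attack_acc:.3f}")
```

The gap between the model's confidence on members and non-members is what such an attack exploits; how large that gap is, and hence how much membership information leaks, can differ across learning algorithms such as K-nearest neighbors and random forest.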