Mingting Liu1, Xiaozhang Liu1,*, Anli Yan1, Xiulai Li1,2, Gengquan Xie1, Xin Tang3
Journal of Information Hiding and Privacy Protection, Vol.3, No.4, pp. 181-192, 2021, DOI:10.32604/jihpp.2021.027385
22 March 2022
Abstract: As machine learning moves into high-risk and sensitive applications
such as medical care, autonomous driving, and financial planning, how to
interpret the predictions of black-box models becomes the key to whether
people can trust machine learning decisions. Interpretability relies on providing
users with additional information or explanations to improve model transparency
and help users understand model decisions. However, this additional information
inevitably exposes the dataset or model to the risk of privacy leakage. We
propose a strategy to reduce model privacy leakage for instance-based
interpretability techniques. The specific procedure is as follows. Firstly,…