Subject: Membership inference attack
Additive logistic mechanism for privacy-preserving self-supervised learning (2022-05)
Yang, Yunhao; Topcu, Ufuk

Self-supervised learning algorithms are vulnerable to privacy attacks, especially in the stage in which a pre-trained model is fine-tuned for the actual task. We focus on self-supervised learning applied to neural networks and design a post-training privacy-protection algorithm for these networks. We introduce a differential privacy mechanism, named the additive logistic mechanism, which adds noise sampled from a logistic distribution to the fine-tuned layer weights of the networks. A unique feature of the protection algorithm is that it allows post-training adjustment of the privacy parameters, alleviating the need for retraining. We apply membership inference attacks to both unprotected and protected models to quantify the trade-off between a model's privacy and its performance. We prove that the post-training protection algorithm is differentially private and empirically show that this protection can achieve a low differential privacy loss of epsilon < 1 while maintaining a performance loss of less than 5%.
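The core idea of adding logistic noise to fine-tuned layer weights after training can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the noise scale (taken here as sensitivity / epsilon, by analogy with the Laplace mechanism) are assumptions — the paper derives its own calibration of the logistic distribution's scale to the privacy parameters.

```python
import numpy as np


def additive_logistic_mechanism(weights, sensitivity, epsilon, rng=None):
    """Perturb an array of layer weights with logistic noise (post-training).

    NOTE: the scale `sensitivity / epsilon` is an illustrative assumption;
    the actual calibration is derived in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    # Zero-mean logistic noise, one sample per weight.
    noise = rng.logistic(loc=0.0, scale=scale, size=np.shape(weights))
    return np.asarray(weights, dtype=float) + noise


# Example: protect the weight matrix of a fine-tuned layer.
w = np.zeros((4, 8))
protected = additive_logistic_mechanism(w, sensitivity=0.1, epsilon=0.5)
```

Because the noise is added after training, the privacy parameter epsilon can be changed by re-running the mechanism on the original fine-tuned weights, without retraining the network — the post-training adjustability the abstract highlights.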