Additive logistic mechanism for privacy-preserving self-supervised learning

Date

2022-05

Abstract

Self-supervised learning algorithms are vulnerable to privacy attacks, especially during the stage in which a pre-trained model is fine-tuned for the downstream task. We focus on self-supervised learning applied to neural networks and design a post-training privacy-protection algorithm for such networks. We introduce a differential privacy mechanism, named the additive logistic mechanism, which adds noise sampled from a logistic distribution to the fine-tuned layer weights of the network. A unique feature of the protection algorithm is that it allows post-training adjustment of the privacy parameters, removing the need for retraining. We apply membership inference attacks to both unprotected and protected models to quantify the trade-off between the model's privacy and performance. We prove that the post-training protection algorithm is differentially private and empirically show that this protection can achieve a low differential privacy loss of ε < 1 while keeping the performance loss below 5%.
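
The following is a minimal sketch of the mechanism as described in the abstract: logistic noise is added to the fine-tuned layer weights after training, so the privacy budget can be adjusted without retraining. The calibration of the noise scale b from the weight sensitivity and epsilon is an assumption made here for illustration (mirroring the usual calibration of additive-noise mechanisms such as the Laplace mechanism); the paper's own analysis may use a different scale.

import numpy as np

def additive_logistic_mechanism(weights, sensitivity, epsilon, rng=None):
    """Perturb fine-tuned layer weights with logistic noise.

    The scale b = sensitivity / epsilon is an assumed calibration;
    the paper's proof may derive a different relationship.
    """
    rng = np.random.default_rng() if rng is None else rng
    b = sensitivity / epsilon  # assumed noise scale
    noise = rng.logistic(loc=0.0, scale=b, size=weights.shape)
    return weights + noise

# Example: protect the weights of a single fine-tuned layer post-training.
# Because the noise is added after training, epsilon can later be changed
# by re-running this step on the original weights, without retraining.
fine_tuned_layer = np.random.normal(size=(128, 10))
protected_layer = additive_logistic_mechanism(
    fine_tuned_layer, sensitivity=1.0, epsilon=0.5
)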
