Local differential privacy (LDP) provides a strong privacy guarantee in distributed settings such as federated learning (FL). When a central curator deploys local randomizers satisfying ε0-LDP, how can clients confirm and measure the privacy guarantees they are given? To answer this question, we introduce an empirical privacy test for FL clients that measures lower bounds of LDP, yielding an empirical ε0 and the probability that two gradients can be distinguished. To audit the claimed privacy guarantee (i.e., ε0), we first identify a worst-case scenario that reaches the theoretical upper bound of LDP, which is essential for empirically materializing the given privacy guarantee. We further instantiate several adversaries in FL under LDP to observe empirical LDP at various attack surfaces. The empirical privacy test with these adversary instantiations enables FL clients to understand more intuitively how the given privacy level protects them and to verify that mechanisms claiming ε0-LDP provide equivalent privacy protection. We also present numerical observations of the measured privacy in these adversarial settings and show that the randomization algorithm LDP-SGD is vulnerable to gradient manipulation and to a maliciously manipulated model. We further discuss employing a shuffler to measure empirical privacy in a collaborative way, as well as measuring the privacy of the shuffled model. Our observations suggest that the theoretical ε in the shuffle model has room for improvement.
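
As a concrete illustration of the kind of empirical test described above, the following Python sketch estimates a lower bound on ε0 by distinguishing the randomized outputs of two gradients. It is only a minimal sketch under stated assumptions, not the paper's implementation: the randomizer (randomized response on a gradient sign), the distinguishing attack, and the use of Clopper-Pearson confidence bounds are illustrative choices introduced here.

    # Minimal sketch: empirically lower-bounding epsilon_0 for a local randomizer
    # by distinguishing randomized versions of two gradients. The randomizer and
    # attack below are hypothetical, chosen only to make the example runnable.
    import math
    import numpy as np
    from scipy.stats import beta  # Clopper-Pearson confidence bounds

    def clopper_pearson(k, n, alpha=0.05):
        """Two-sided (1 - alpha) confidence interval for a binomial proportion."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    def empirical_epsilon(randomize, grad_a, grad_b, attack, trials=100_000):
        """Estimate a lower bound on epsilon from a distinguishing attack.

        `attack(output) -> bool` guesses that the output came from grad_a.
        Any eps_0-LDP randomizer must satisfy
        P[attack | grad_a] <= exp(eps_0) * P[attack | grad_b],
        so a confidence-bounded ratio of the two rates gives an empirical
        lower bound on eps_0.
        """
        hits_a = sum(attack(randomize(grad_a)) for _ in range(trials))
        hits_b = sum(attack(randomize(grad_b)) for _ in range(trials))
        p_a_low, _ = clopper_pearson(hits_a, trials)   # lower bound on true-positive rate
        _, p_b_high = clopper_pearson(hits_b, trials)  # upper bound on false-positive rate
        if p_a_low <= 0.0 or p_b_high <= 0.0:
            return 0.0
        return max(0.0, math.log(p_a_low / p_b_high))

    # Toy usage: randomized response on the sign of the first gradient coordinate.
    rng = np.random.default_rng(0)
    eps0 = 1.0
    keep_prob = math.exp(eps0) / (math.exp(eps0) + 1.0)

    def randomize(grad):
        # Report the true sign with probability exp(eps0) / (exp(eps0) + 1).
        sign = 1.0 if grad[0] >= 0 else -1.0
        return sign if rng.random() < keep_prob else -sign

    print(empirical_epsilon(randomize, np.array([1.0]), np.array([-1.0]),
                            attack=lambda out: out > 0))

With enough trials, the reported lower bound approaches the nominal ε0 = 1.0 for this randomizer, since the likelihood ratio of its two outputs is exactly exp(ε0); a looser attack or a weaker adversary would report a smaller empirical value.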