
k-IPfedAvg: k-Anonymous Integrally Private Federated Averaging with Convergence Guarantee
  • Ayush K. Varshney (Corresponding Author: [email protected])
  • Vicenc Torra

Abstract

Federated Learning (FL) has established itself as a widely accepted distributed learning paradigm. Because raw data are never shared, it may appear to be privacy preserving, but recent studies have revealed vulnerabilities in weight sharing that lead to information disclosure. Hence, privacy-preserving approaches must be incorporated during aggregation to avoid such disclosures. In the FL literature, little attention has been paid to generating generalized models, i.e., models that could have been generated by multiple sets of datasets and thus avoid identity disclosure. Integrally private models are models that recur frequently from different datasets. In this paper we therefore focus on generating integrally private global models by proposing k-Anonymous Integrally Private Federated Averaging (k-IPfedAvg), a novel aggregation algorithm that clusters similar user weights to compute a global model which can be generated by multiple sets of users. Convergence analysis of k-IPfedAvg reveals a rate of O(1/T) over training epochs. Furthermore, the experimental analysis shows that k-IPfedAvg maintains a consistent level of utility across various privacy parameters, in contrast to existing noise-based privacy-preserving mechanisms. We compare k-IPfedAvg with classical fedAvg and its differentially private counterpart. Our results show that k-IPfedAvg achieves accuracy comparable to the baseline fedAvg and outperforms DP-fedAvg on iid and non-iid distributions of the MNIST, FashionMNIST and CIFAR10 datasets.
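The sketch below illustrates the clustering-based aggregation idea described in the abstract: client weight vectors are grouped into clusters of at least k members and averaged within each cluster, so every aggregated weight could have been produced by at least k different clients. It is a minimal illustration only, not the authors' k-IPfedAvg algorithm; the function name, greedy nearest-neighbour clustering rule, and leftover handling are assumptions made for this example.

```python
# Minimal sketch of k-anonymous, clustering-based aggregation in the spirit of
# k-IPfedAvg. Illustrative assumptions: greedy seed choice, Euclidean distance,
# and a simple merge of leftover clients into the last cluster.
import numpy as np

def k_anonymous_aggregate(client_weights, k):
    """Average flattened client weight vectors within clusters of size >= k,
    then return the mean of the cluster centroids as the global update."""
    W = np.stack(client_weights)                # shape (n_clients, n_params)
    n = W.shape[0]
    if n < k:                                   # cannot form any k-sized cluster
        return W.mean(axis=0)
    unassigned = set(range(n))
    centroids = []
    while len(unassigned) >= k:
        seed = min(unassigned)                  # illustrative seed choice
        idx = np.array(sorted(unassigned))
        d = np.linalg.norm(W[idx] - W[seed], axis=1)
        members = idx[np.argsort(d)[:k]]        # seed plus its k-1 nearest peers
        centroids.append(W[members].mean(axis=0))
        unassigned -= set(members.tolist())
    if unassigned:                              # fold leftovers into the last cluster
        leftovers = np.array(sorted(unassigned))
        centroids[-1] = np.vstack([centroids[-1][None, :], W[leftovers]]).mean(axis=0)
    return np.mean(centroids, axis=0)

# Example: 10 simulated clients with 5 parameters each, k = 3
rng = np.random.default_rng(0)
clients = [rng.normal(size=5) for _ in range(10)]
print(k_anonymous_aggregate(clients, k=3))
```

In a full FL round, each client's model weights would be flattened into such a vector before aggregation and the resulting global update reshaped back into the model architecture; the paper's method additionally provides the convergence guarantee and privacy analysis summarized above.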
Submitted to TechRxiv: 21 Dec 2023
Published in TechRxiv: 22 Dec 2023