Fortifying SplitFed Learning: Strengthening Resilience Against Malicious Clients
  • Ashwin Kumaar, Computing and Information Science, Anglia Ruskin University
  • Raj Mani Shukla, Computing and Information Science, Anglia Ruskin University (Corresponding Author: [email protected])
  • Amar Nath Patra, School of Computing and Information Sciences, Radford University

Abstract

This article analyzes the vulnerability of SplitFed Learning to model poisoning attacks and develops methods to protect such systems against them. SplitFed Learning is a distributed learning paradigm in which a neural network model is split between clients and the server, in contrast to traditional Federated Learning. SplitFed Learning enhances the security and privacy of data, and clients need not perform heavy computation during training, since each trains only part of the model. This approach allows the model to make accurate predictions while maintaining the confidentiality of sensitive information. In addition to implementing a SplitFed model, the paper proposes a distance-based method that can poison SplitFed Learning-based systems. It then develops a novel prevention strategy based on robust statistical properties of the sample. To evaluate the proposed methodology, we use a cell-image dataset of the malaria parasite as a test case. By addressing the impact of adversarial attacks, this paper contributes to the advancement of deep learning techniques.
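As a rough illustration of the kind of defense the abstract describes, the sketch below filters client updates using robust statistics (coordinate-wise median and median absolute deviation) to flag updates that lie anomalously far from the rest. This is a minimal, hypothetical example, not the paper's exact method; the function name, threshold, and MAD-based scoring are assumptions for illustration.

```python
import numpy as np

def filter_updates(updates, z_thresh=3.5):
    """Flag client updates whose distance from the coordinate-wise
    median exceeds a robust (MAD-based) modified z-score threshold.
    Hypothetical sketch; not the paper's exact defense."""
    updates = np.asarray(updates)        # shape: (n_clients, n_params)
    center = np.median(updates, axis=0)  # robust centre of all updates
    dists = np.linalg.norm(updates - center, axis=1)    # one distance per client
    med_d = np.median(dists)
    mad = np.median(np.abs(dists - med_d))              # robust spread of distances
    if mad == 0:                         # all clients agree: keep everything
        return np.ones(len(updates), dtype=bool)
    z = 0.6745 * (dists - med_d) / mad   # modified z-score (0.6745 ~ MAD-to-sigma)
    return z < z_thresh                  # True = keep this client's update

# Five benign clients cluster near zero; one poisoned client is far away.
benign = [np.zeros(4) + 0.01 * i for i in range(5)]
poisoned = [np.full(4, 10.0)]
mask = filter_updates(benign + poisoned)
```

In this toy run the poisoned client's distance from the median dwarfs the robust spread of the benign clients, so only its update is flagged for exclusion before aggregation.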
Submitted to TechRxiv: 16 May 2024
Published in TechRxiv: 21 May 2024