
Robust Federated Learning against Poisoning Attacks using Capsule Neural Networks
  • Mohsen Rezvani, Shahrood University of Technology (corresponding author: [email protected])
  • Mohsen Sorkhpour, Shahrood University of Technology
  • Esmaeel Tahanian, Shahrood University of Technology
  • Mansoor Fateh, Shahrood University of Technology


The expansion of machine learning applications across different domains has given rise to a growing interest in tapping into the vast reserves of data generated by edge devices. To preserve data privacy, federated learning was developed as a collaborative, decentralized, privacy-preserving technology that addresses the challenges of data silos and data sensitivity. This technology faces certain limitations due to the limited network connectivity of mobile devices and the presence of malicious attackers. In addition, data samples across devices are typically not independent and identically distributed (non-IID), which makes it harder to achieve convergence in fewer communication rounds. In this paper, we simulate several poisoning attacks, namely Byzantine, label-flipping, and noisy-data attacks, under non-IID data. We propose Robust Federated Learning against Poisoning Attacks using Capsule Neural Networks (RFCaps) to increase safety and accelerate convergence. RFCaps incorporates prediction-based clustering and a gradient quality evaluation method, excluding attackers from the aggregation phase by applying multiple filters and accelerating convergence by using the highest-quality gradients. Compared to the MKRUM, COMED, TMean, and FedAvg algorithms, RFCaps is highly robust in the presence of attackers and achieves higher accuracy, up to 80%, on both the MNIST and Fashion-MNIST datasets.
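To make the comparison baselines concrete, the sketch below illustrates the kind of server-side aggregation rules the abstract names: plain federated averaging (FedAvg), coordinate-wise median (COMED), and trimmed mean (TMean). This is a minimal illustration of the general techniques, not the paper's RFCaps method; the function names, the toy gradient values, and the trim fraction `beta` are assumptions for demonstration.

```python
import numpy as np

def fedavg(grads):
    """Plain federated averaging: unweighted mean of client gradients."""
    return np.mean(grads, axis=0)

def comed(grads):
    """Coordinate-wise median: robust to a minority of outlier clients."""
    return np.median(grads, axis=0)

def tmean(grads, beta=0.2):
    """Trimmed mean: in each coordinate, drop the beta fraction of
    largest and smallest values, then average the rest."""
    grads = np.sort(grads, axis=0)
    k = int(len(grads) * beta)
    return np.mean(grads[k:len(grads) - k], axis=0)

# Five honest clients report gradients near 1.0; one Byzantine client
# submits a large value to poison the aggregate.
grads = np.array([[1.0], [1.1], [0.9], [1.0], [1.05], [100.0]])
print(fedavg(grads))  # pulled far from the honest value by the attacker
print(comed(grads))   # stays close to the honest value
print(tmean(grads))   # outlier is trimmed before averaging
```

The toy run shows why a single Byzantine client suffices to corrupt FedAvg, while median- and trimming-based rules bound its influence; this is the weakness the filtering approach described above also targets.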
30 Aug 2023: Submitted to Expert Systems
13 Sep 2023: Assigned to Editor
13 Sep 2023: Submission Checks Completed
19 Sep 2023: Reviewer(s) Assigned
14 Oct 2023: Review(s) Completed, Editorial Evaluation Pending