Individualized Federated Learning Based on Model Pruning
Yueying Zhou
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Gaoxiang Duan
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Tianchen Qiu
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Li Tian
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Xiaoying Zheng (Corresponding Author)
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Yongxin Zhu (Corresponding Author)
University of Chinese Academy of Sciences, Shanghai Advanced Research Institute, Chinese Academy of Sciences

Abstract

Federated learning is a distributed framework for machine learning. Traditional approaches often assume that client data are independent and identically distributed (IID). In real-world scenarios, however, client data exhibit personalized characteristics that deviate from the IID assumption, and practical deployment is further hindered by substantial communication overhead and limited resources at edge nodes. To address these challenges of uneven data distribution, communication bottlenecks, and constrained edge resources, this paper introduces an individualized federated learning framework based on model pruning. The framework adapts each client's local model to the personalized distribution of its local data while still satisfying the server's model aggregation requirements. Using sparse operations, it performs personalized model pruning, efficiently compresses model parameters, and reduces the computational load on edge nodes. Our approach currently achieves a compression ratio of 3.8% on the non-IID dataset FEMNIST without compromising final training accuracy, while accelerating training by 12.3%.
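The client-side pruning step described in the abstract can be sketched as magnitude-based masking: each client keeps only its largest-magnitude weights, yielding a sparse model to upload for aggregation. The function name, the use of a single global threshold, and the `keep_ratio` value (borrowed from the 3.8% compression figure) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def magnitude_prune(weights, keep_ratio=0.038):
    """Zero out all but the largest-magnitude weights.

    keep_ratio=0.038 mirrors the 3.8% compression ratio quoted in the
    abstract; this is an illustrative reading, not the paper's algorithm.
    Returns the sparsified weights and the binary mask that was applied.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    # Threshold at the magnitude of the k-th largest weight.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Each client would prune its locally trained parameters before upload,
# so the server aggregates only the surviving sparse entries.
rng = np.random.default_rng(0)
w = rng.standard_normal((100, 100))
pruned, mask = magnitude_prune(w)
```

With continuous-valued weights (no magnitude ties), exactly `k` entries survive, so the uploaded parameter volume shrinks to roughly `keep_ratio` of the dense model.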
Submitted to TechRxiv: 11 Dec 2023
Published in TechRxiv: 13 Dec 2023