Abstract
Federated learning provides data privacy protection by keeping clients'
training data local, and sending only model parameter updates to the
centralised server/aggregator. However, the federated learning framework
is still vulnerable to various attacks, such as data poisoning, launched
by malicious/compromised clients.
Cautious clients participating in federated learning, on the other hand,
employ privacy protection techniques such as differential privacy to
keep their model updates safe from inference attacks launched by the
centralised aggregator. An aggregator thus needs to employ techniques to
differentiate between model updates from benign, malicious and cautious
clients, and to mitigate the effects of updates from non-benign clients.
To reach this goal, we propose a novel federated learning system called
FLAP, which is robust against both the attacks launched by malicious
clients and the privacy protections employed by cautious clients.