Emerging as a promising distributed learning paradigm, federated learning (FL) has been widely adopted in many fields. Nonetheless, a major challenge for FL in real-world deployments is the Byzantine attack, in which compromised clients mislead or poison the trained model by falsifying or manipulating their local model parameters. To address this problem, we present a novel reputation-based Byzantine-robust FL scheme, dubbed FLPhish, for defending against Byzantine attacks under an Ensemble Federated Learning (EFL) architecture. Specifically, we first develop a novel EFL architecture that makes FL compatible with heterogeneous deep models across clients. Second, we craft a phishing method for EFL that exposes potential Byzantine behavior. Third, we devise a Bayesian inference-based reputation mechanism to measure each client's confidence level and thereby identify Byzantine attackers. Finally, we rigorously analyze how FLPhish defends against Byzantine attacks. Extensive experiments demonstrate that FLPhish achieves superior efficacy in defending against Byzantine attacks in EFL, under different fractions of Byzantine attackers and different degrees of distribution imbalance.
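The abstract does not spell out the reputation update, so the sketch below is only a minimal illustration, assuming a standard Beta-Bernoulli (Bayesian) reputation model in which each client's reputation is the posterior mean of passing the phishing checks; the `BetaReputation` class, the flagging threshold, and the pass/fail signal are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class BetaReputation:
    """Hypothetical Beta-Bernoulli reputation tracker for one client.

    alpha counts phishing checks the client passed, beta counts failures;
    the reputation score is the posterior mean alpha / (alpha + beta).
    This is an illustrative sketch, not the FLPhish formulation.
    """
    alpha: float = 1.0  # prior pseudo-count of honest behavior
    beta: float = 1.0   # prior pseudo-count of Byzantine behavior

    def update(self, passed_phishing_check: bool) -> None:
        # Bayesian update of the Beta posterior from one pass/fail observation.
        if passed_phishing_check:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def score(self) -> float:
        return self.alpha / (self.alpha + self.beta)


def flag_byzantine(reputations: dict[str, BetaReputation],
                   threshold: float = 0.5) -> list[str]:
    """Return clients whose posterior reputation falls below a (hypothetical) threshold."""
    return [cid for cid, rep in reputations.items() if rep.score < threshold]


# Usage example: three clients, one of which repeatedly fails the phishing check.
reps = {cid: BetaReputation() for cid in ("c1", "c2", "c3")}
for _ in range(10):
    reps["c1"].update(True)
    reps["c2"].update(True)
    reps["c3"].update(False)   # behaves like a Byzantine client
print(flag_byzantine(reps))    # -> ['c3']
```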