Federated Learning (FL) is a privacy-centric framework for distributed learning in which devices collaborate to train a shared global model while keeping their raw data local. Since workers may naturally form groups based on common objectives and privacy rules, we are motivated to extend FL to such settings. Because workers can belong to multiple groups, complexities arise in understanding privacy leakage and in adhering to privacy policies. In this paper, we propose differentially private overlapping grouped learning (DP-OGL), which shares learning across groups through common workers. We derive formal privacy guarantees between every pair of workers under the honest-but-curious threat model with multiple group memberships. Our experiments show that DP-OGL improves privacy-utility trade-offs compared to a baseline FL system.
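The abstract does not specify DP-OGL's mechanism; as a generic illustration of how a worker's update can be released with differential privacy before being shared with multiple groups, the following sketch applies the standard clip-and-add-Gaussian-noise recipe. The function name `privatize_update` and all parameter values are hypothetical, not from the paper.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip a worker's model update to bound sensitivity, then add
    Gaussian noise (the Gaussian mechanism). Illustrative only; the
    actual DP-OGL mechanism is defined in the paper itself."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise

# A worker in two overlapping groups would release an independently
# privatized copy of its update to each group it belongs to.
update = np.array([3.0, 4.0])  # norm 5, clipped down to norm 1
releases = {g: privatize_update(update) for g in ("group_A", "group_B")}
```

With `noise_mult=0` the function reduces to pure norm clipping, which makes the sensitivity bound easy to check in isolation.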
IEEE International Symposium on Information Theory (ISIT)
2024-07-12
2024-11-13