Privacy-Preserving Federated Learning for Collaborative Risk Monitoring Across Financial Institutions: Balancing Regulatory Compliance and Intelligence Sharing

Authors

  • Minju Zhong, Department of Analytics, University of Chicago, Chicago, USA

Keywords

Federated Learning, Financial Privacy, Differential Privacy, Risk Monitoring

Abstract

Financial institutions today face growing pressure to balance data privacy protection with the sharing of risk intelligence across organizations. This paper offers an in-depth analysis of how privacy-preserving federated learning techniques can be applied to cross-institutional financial risk monitoring. At the core of the proposed framework is the integration of differential privacy mechanisms with federated averaging algorithms, enabling multiple financial institutions to collaboratively train fraud-detection models without exposing sensitive customer data. Experimental evaluations on synthetic financial transaction datasets show that the framework achieves 94.7% detection accuracy under a configured differential privacy budget (ε = 1.0), with privacy accounting across training rounds as described in Section 3.3. By applying the combined sparsification and quantization strategy, the total communication volume decreases by 97.2% relative to the uncompressed baseline, while retaining 98.9% of the baseline accuracy (Table 3). This research provides practical guidance for financial institutions seeking to adopt privacy-preserving collaborative analytics that meet regulatory requirements, such as the Gramm-Leach-Bliley Act.
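The abstract describes the framework only at a high level: each institution's model update is combined via federated averaging under a differential privacy budget, and communication is reduced through sparsification and quantization. As an illustration only (not the authors' implementation), a minimal sketch of differentially private federated averaging with top-k sparsification might look like the following; all function names, the clipping norm, and the noise multiplier are illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update to a maximum L2 norm (bounds each
    institution's influence, a prerequisite for calibrating DP noise)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries and zero the rest,
    reducing the volume each client must transmit to the server."""
    idx = np.argsort(np.abs(update))[-k:]
    mask = np.zeros_like(update)
    mask[idx] = 1.0
    return update * mask

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.1, rng=None):
    """Average clipped client updates and add Gaussian noise whose scale
    is calibrated to the clipping norm and number of participants."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

In a full system the noise multiplier would be chosen so that the privacy accountant keeps the cumulative loss across training rounds within the target budget (ε = 1.0 in the reported experiments); the constants above are placeholders for that calibration.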

Published

2026-04-02

Section

Articles