dc.description.abstract | Federated learning allows multiple parties to collaboratively train a model without sharing sensitive data, making it suitable for applications such as border control, criminal investigation, and device security. However, models trained in this way remain vulnerable to privacy attacks. Two notable threats are membership inference attacks (MIAs), which aim to determine whether specific samples were included in the training data, and model inversion (MI) attacks, which attempt to reconstruct training samples from the model. State-of-the-art defenses against MI include transfer learning (TL) and bidirectional dependency optimization (BiDO); against MIAs, random cropping (RC) has shown strong mitigation potential. This study investigates whether RC can be applied to a model typically used to demonstrate MI attacks without degrading that model's test accuracy, and how effective the combination of the three defenses is against a strong MIA. Each defense is applied individually, in pairs, and in full combination, with parameters fine-tuned accordingly. The models are then subjected to two state-of-the-art attacks: IF-GMI as the MI attack on undefended models, and LiRA as the MIA on both defended and undefended models. Each defense configuration reduces data leakage at acceptable utility and cost. The results show that the combination of all three defenses (TL + BiDO + RC) achieves the greatest mitigation of MIA risk, without notable degradation in model performance. | |