Research Output
Practical defences against model inversion attacks for split neural networks
  We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be performed successfully even when the attacker has limited knowledge of the data distribution. We propose a simple additive noise method to defend against model inversion, finding that it can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data.
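
  The abstract does not spell out how the additive noise defence is wired into the split network, so the following is only a minimal sketch of the general idea: noise is added to the cut-layer activations on the client side before they are sent to the computational server. The class name `NoisySplitClient`, the layer sizes, the choice of a Laplacian distribution, and the scale value 0.5 are all illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class NoisySplitClient(nn.Module):
        """Client-side segment of a split network that noises its
        intermediate activations before sending them to the server.
        Architecture and noise parameters are illustrative only."""

        def __init__(self, noise_scale: float = 0.5):
            super().__init__()
            # Hypothetical client-side layers for MNIST-sized inputs.
            self.features = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 256),
                nn.ReLU(),
            )
            self.noise_scale = noise_scale

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            activations = self.features(x)
            # Additive Laplacian noise on the cut-layer output; the
            # computational server only ever sees the noised activations.
            noise = torch.distributions.Laplace(0.0, self.noise_scale).sample(
                activations.shape
            )
            return activations + noise

    if __name__ == "__main__":
        client = NoisySplitClient(noise_scale=0.5)
        batch = torch.randn(8, 1, 28, 28)   # dummy MNIST-shaped batch
        smashed = client(batch)             # what the server would receive
        print(smashed.shape)                # torch.Size([8, 256])

  Larger noise scales would be expected to degrade the inversion attack more strongly at the cost of task accuracy; the trade-off reported in the paper applies to its own settings, not to this sketch.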

  • Type: Conference Paper (unpublished)
  • Date: 07 May 2021
  • Publication Status: Unpublished
  • Funders: Edinburgh Napier Funded

Citation

Titcombe, T., Hall, A. J., Papadopoulos, P., & Romanini, D. (2021, May). Practical defences against model inversion attacks for split neural networks. Paper presented at ICLR 2021 Workshop on Distributed and Private Machine Learning (DPML 2021), Online.
