Research Output
Can Federated Models Be Rectified Through Learning Negative Gradients?
  Federated Learning (FL) is a method for training machine learning (ML) models in a decentralised manner while preserving the privacy of data from multiple clients. However, FL is vulnerable to malicious attacks, such as poisoning attacks, and is challenged by the GDPR’s “right to be forgotten”. This paper introduces a negative gradient-based machine unlearning technique to address these issues. Experiments on the MNIST dataset show that subtracting a client’s local model parameters can remove the influence of its training data from the global model, thereby “unlearning” that data within the FL paradigm. Although the performance of the resulting global model decreases, the proposed technique keeps validation accuracy above 90%, a drop that is acceptable for an FL model. The experimental work demonstrates that, in application areas where data deletion from ML models is a necessity, this approach is a significant step towards secure and robust FL systems.
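  The parameter-subtraction idea described above can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative example only, not the authors' exact procedure: the single-round FedAvg setting, the function names, and the size-based client weighting are assumptions made for the sketch. It shows how a target client's weighted contribution could be removed from an aggregated global model and the result renormalised over the remaining clients.

    import numpy as np

    def fedavg(client_params, client_sizes):
        # Weighted average of client parameter vectors (standard FedAvg).
        total = float(sum(client_sizes))
        return sum((n / total) * p for n, p in zip(client_sizes, client_params))

    def unlearn_client(global_params, target_params, target_size, total_size):
        # Apply the target client's contribution with a negative sign (the
        # "negative gradient" direction) and rescale so the result is the
        # weighted average over the remaining clients only.
        w = target_size / float(total_size)
        return (global_params - w * target_params) / (1.0 - w)

    # Toy usage with flattened parameter vectors for three clients.
    rng = np.random.default_rng(0)
    clients = [rng.standard_normal(10) for _ in range(3)]
    sizes = [100, 200, 300]
    global_model = fedavg(clients, sizes)
    rectified = unlearn_client(global_model, clients[0], sizes[0], sum(sizes))

  In this single-round setting, dividing by (1 − w) makes the rectified parameters exactly the weighted average of the remaining clients; the paper's multi-round FL experiments are more involved than this sketch.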

  • Date:

    31 January 2024

  • Publication Status:

    Published

  • Publisher:

    Springer Nature

  • DOI:

    10.1007/978-3-031-52265-9_2

  • Funders:

    Edinburgh Napier Funded

Citation

Tahir, A., Tan, Z., & Babaagba, K. O. (2024). Can Federated Models Be Rectified Through Learning Negative Gradients? In Big Data Technologies and Applications (pp. 18–32). Springer Nature. https://doi.org/10.1007/978-3-031-52265-9_2

Authors

A. Tahir, Z. Tan, K. O. Babaagba

Keywords

Federated Learning, Machine Unlearning, Negative Gradients, Model Rectification
