Key Findings:
Federated Learning Vulnerability:
The federated learning environment was successfully attacked: once we introduced the noisy device, the global model's performance declined significantly.
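To make the attack concrete, here is a minimal sketch of one federated averaging round. The linear model, parameter values, and the fixed poisoned update are all hypothetical stand-ins, not the lab's actual setup; the point is only that averaging in one corrupted update inflates the global model's MSE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: every honest device estimates the line y = 2x + 1.
true_params = np.array([2.0, 1.0])  # (w, b)
x_test = np.linspace(0.0, 1.0, 50)
y_test = true_params[0] * x_test + true_params[1]

def honest_update():
    """An honest device's parameters: truth plus small estimation noise."""
    return true_params + rng.normal(0.0, 0.05, size=2)

def mse(params):
    """Test-set MSE of a global model (w, b)."""
    w, b = params
    return float(np.mean((w * x_test + b - y_test) ** 2))

clean_updates = [honest_update() for _ in range(4)]
mse_clean = mse(np.mean(clean_updates, axis=0))

# One noisy device submits a far-off update (illustrative poisoned values).
attacked_updates = clean_updates + [np.array([7.0, -4.0])]
mse_attacked = mse(np.mean(attacked_updates, axis=0))
```

Plain federated averaging weights every device equally, so a single bad update shifts the global parameters and `mse_attacked` ends up far above `mse_clean`.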
Trust-Based Model Averaging Efficacy:
Trust-based model averaging proved to be an effective defense mechanism: it mitigated the effect of the attack and kept the global model's MSE relatively low.
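The defense can be sketched as a weighted average in which each device's update is scaled by its trust score. The parameter vectors and trust values below are hypothetical illustrations, not the lab's data:

```python
import numpy as np

def trust_weighted_average(updates, trust_scores):
    """Average per-device updates weighted by their trust scores.

    Devices with trust near zero contribute almost nothing, so a
    distrusted attacker cannot drag the global model off target.
    """
    updates = np.asarray(updates, dtype=float)
    weights = np.asarray(trust_scores, dtype=float)
    weights = weights / weights.sum()  # normalise to a convex combination
    return weights @ updates

honest = [np.array([2.0, 1.0]), np.array([2.1, 0.9]), np.array([1.9, 1.1])]
poisoned = np.array([12.0, -8.0])  # hypothetical attacker update
updates = honest + [poisoned]

plain_avg = np.mean(updates, axis=0)                       # attacker included
trusted_avg = trust_weighted_average(updates, [1, 1, 1, 0])  # attacker zeroed
```

With the attacker's trust at zero, the trust-weighted average reduces to the mean of the honest updates, while the plain average is pulled far off by the poisoned vector.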
Dynamic Trust Scores:
Because the trust scores were updated dynamically, the algorithm was able to identify the untrustworthy device and exclude its data from subsequent model updates.
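One simple way such dynamic updating can work, assuming a median-based outlier test (an assumption for illustration, not necessarily the rule this lab used), is to halve the trust of any device whose update sits far from the median update:

```python
import numpy as np

def update_trust(trust, updates, decay=0.5, threshold=2.0):
    """Decay trust for devices whose update is an outlier.

    A device whose distance from the median update exceeds
    `threshold` times the median distance has its trust halved;
    all other devices keep their current trust.
    """
    updates = np.asarray(updates, dtype=float)
    trust = np.asarray(trust, dtype=float)
    median = np.median(updates, axis=0)
    dist = np.linalg.norm(updates - median, axis=1)
    cutoff = threshold * (np.median(dist) + 1e-8)
    return np.where(dist > cutoff, trust * decay, trust)

# Hypothetical rounds: devices 0-2 are honest, device 3 is noisy.
trust = np.ones(4)
for _ in range(5):
    updates = [[2.0, 1.0], [2.05, 0.95], [1.95, 1.05], [10.0, -5.0]]
    trust = update_trust(trust, updates)
```

After a few rounds the noisy device's trust decays toward zero, so a trust-weighted average effectively stops considering its updates, while the honest devices retain full trust.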
Detailed Analysis:
This lab demonstrated a scenario in which a model in a federated learning environment was successfully attacked: the attack injected noisy updates, which increased the model's MSE and degraded its performance.
Once we implemented the trust-based system, however, the model identified the attacking device and stopped using its data in the updates.
This demonstrates the efficacy of this style of defense in real-world applications.
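The full scenario can be tied together in one end-to-end sketch: several rounds in which a noisy device's trust decays, so the trust-weighted global model's MSE recovers. All parameter values and the fixed poisoned update are hypothetical, chosen only to reproduce the qualitative result described above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical target model y = 2x + 1 and a held-out test set.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0

def mse(params):
    return float(np.mean((params[0] * x + params[1] - y) ** 2))

trust = np.ones(4)  # devices 0-2 honest, device 3 noisy
mse_history = []
for _ in range(6):
    honest = [np.array([2.0, 1.0]) + rng.normal(0.0, 0.02, 2) for _ in range(3)]
    updates = np.array(honest + [np.array([9.0, -6.0])])  # poisoned update
    # Halve the trust of any device far from the median update.
    dist = np.linalg.norm(updates - np.median(updates, axis=0), axis=1)
    trust = np.where(dist > 2.0 * (np.median(dist) + 1e-8), trust * 0.5, trust)
    # Trust-weighted averaging: low-trust devices barely contribute.
    weights = trust / trust.sum()
    mse_history.append(mse(weights @ updates))
```

Early rounds still feel the attack, but as the noisy device's trust decays the global model's MSE falls back toward the clean baseline, mirroring the recovery observed in the lab.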