Artificial Intelligence plays an increasingly crucial role in modern life. From healthcare to national security, the need for robust defenses for these models has never been greater. The goal of this project is to demonstrate attacks on, and defenses for, these models.
Poisoning attacks comprise a number of different techniques aimed at degrading the accuracy of AI models. Generally, these attacks fall into the following categories:
Dataset Poisoning: This type of attack manipulates the datasets on which models are trained (a minimal sketch follows this list).
Algorithm Poisoning: This type of attack alters the algorithm used to train the model.
Model Poisoning: This type of attack directly modifies the model itself, for example by tampering with its weights.
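To make the first category concrete, below is a minimal sketch of a label-flipping dataset-poisoning attack. The synthetic dataset, logistic-regression model, and 30% flip rate are illustrative assumptions chosen for this sketch, not artifacts of the project itself.

```python
# A minimal sketch of a label-flipping dataset-poisoning attack.
# The dataset, model, and flip rate are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Build a toy binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def flip_labels(labels, rate, rng):
    """Flip the labels of a random fraction of training examples."""
    poisoned = labels.copy()
    n_flip = int(rate * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

# Train once on clean labels and once on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_poisoned = flip_labels(y_train, rate=0.3, rng=rng)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Comparing test accuracy shows how corrupted training labels
# degrade the model without touching the test data at all.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Running the sketch shows a clear accuracy drop for the poisoned model; real attacks are typically stealthier, flipping far fewer labels or targeting specific classes to avoid detection.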
Ethical and Societal Implications
It is easy to imagine how an AI model trained on poisoned data could be made to unfairly target specific groups of individuals.
Financial Risks
AI is frequently used to detect fraud. If a bad actor can compromise this detection, they could commit fraud unnoticed.
National Security
As governments rely on AI more and more, the need to secure the systems underpinning national security grows in tandem.
For more information on how you can contribute or learn more, please contact Gabriel Gillott at ggillott@students.kennesaw.edu.