Frequently asked questions
The main focus of the project is to build a better understanding of attacks on AI models and the defenses against them.
Poisoning attacks target AI models by corrupting their training data, undermining the model's ability to fulfill its intended purpose.
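As an illustration, here is a minimal sketch of one common poisoning technique, label flipping, in which an attacker changes the labels on a fraction of the training examples. This is not code from the project's modules; the function name `poison_labels` and all parameters are hypothetical.

```python
import random

def poison_labels(dataset, flip_fraction=0.1, num_classes=2, seed=0):
    """Return a copy of (features, label) pairs with a fraction of labels flipped.

    In a label-flipping poisoning attack, the attacker corrupts part of the
    training data so that a model trained on it performs worse.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        x, y = poisoned[i]
        # Replace the true label with a different class chosen at random.
        wrong = rng.choice([c for c in range(num_classes) if c != y])
        poisoned[i] = (x, wrong)
    return poisoned

# Example: a tiny binary-labeled dataset of 20 points.
clean = [([float(i)], i % 2) for i in range(20)]
dirty = poison_labels(clean, flip_fraction=0.2)
changed = sum(1 for a, b in zip(clean, dirty) if a[1] != b[1])
print(changed)  # 4 of the 20 labels are flipped
```

Note that only the labels change; the features are left untouched, which makes this kind of attack hard to spot by inspecting the inputs alone.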
Each module will include a pre-lab, a lab, and a post-lab.
The models are hosted on this website under the Modules page.
Gabriel Gillott: ggillott@students.kennesaw.edu
Also, you can click the Contact Us link below.