Adversarial-Machine-Learning

Adversarial Attacks

Adversarial attacks fool machine learning models by adding small perturbations to the input data. These perturbations are usually imperceptible to humans, yet they can cause the model to misclassify the input. There are many types of adversarial attacks, each with its own strengths and weaknesses; two common ones are:

  • Fast Gradient Sign Method (FGSM): This attack computes the gradient of the loss with respect to the input and adds a single perturbation of size epsilon in the sign direction of that gradient, i.e. the direction that locally increases the loss. It is fast and often effective, but as a one-step attack it is relatively easy to defend against. A minimal sketch is shown after this list.

  • Projected Gradient Descent (PGD): This attack is an iterative extension of FGSM. It repeatedly takes small gradient-sign steps and, after each step, projects the perturbed input back into an epsilon-ball around the original input, so the total perturbation stays bounded. It is more powerful than FGSM but also more computationally expensive. A sketch follows the FGSM one below.
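
The repository's own attack code is not shown on this page; the following is a minimal PyTorch sketch of FGSM, assuming a cross-entropy classifier and inputs in [0, 1] (the function name, epsilon default, and clamping range are illustrative assumptions, not the project's API):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Illustrative sketch; names and defaults are assumptions, not the repo's API.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # assumes a standard classifier
    loss.backward()
    # One step of size epsilon in the sign direction of the input gradient,
    # i.e. the direction that locally increases the loss, then clip to [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```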

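A corresponding PyTorch sketch of L-infinity PGD, again with illustrative hyperparameters (alpha, steps, epsilon) that need not match the repository's settings:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=40):
    # Illustrative sketch; hyperparameter names and values are assumptions.
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then project back into the epsilon-ball
        # around the original input and clip to the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - epsilon), x_orig + epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The projection step (the torch.min/torch.max pair) is what keeps every iterate within the epsilon-ball around the original input; that bounded, iterative search is the main difference from a single FGSM step.
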
List of attacks implemented

Contributors

About

Some adversarial attacks implemented on different ML models to observe their effect.
