ML/AI Attack Families

  • DeepFool
  • Fast gradient method (sketched after this list)
  • Basic iterative method
  • Projected gradient descent
  • Jacobian saliency map
  • Universal perturbation
  • Virtual adversarial method
  • C&W L_2 and L_inf attacks
  • NewtonFool
  • Elastic net attack
  • Spatial transformations attack
  • Query-efficient black-box attack
  • Zeroth-order optimization attack
  • Decision-based attack
  • Adversarial patch

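To make the flavor of these families concrete, the fast gradient method (Goodfellow et al.) perturbs an input x by a small step in the direction of the sign of the loss gradient. A sketch of the one-step update:

```latex
x_{\mathrm{adv}} = x + \varepsilon \cdot \mathrm{sign}\left( \nabla_{x} J(\theta, x, y) \right)
```

Here J is the training loss, θ the model parameters, y the true label, and ε the perturbation budget. Several of the other gradient-based families above (the basic iterative method, projected gradient descent) iterate variants of this step under a norm constraint.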
These attack families represent another target area for mature and advanced hostile activity, and abuse of ML/AI has been underway for quite a while.

IBM published the Adversarial Robustness Toolbox (ART), a suite of ML and classifier attacks, defensive methods, and some additional work on detection; it remains an active project: https://github.com/IBM/adversarial-robustness-toolbox & https://adversarial-robustness-toolbox.readthedocs.io/en/latest/index.html
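
As a rough illustration of how the toolbox is used (a minimal sketch, not taken from the wiki; names follow the ART 1.x API, e.g. `art.attacks.evasion.FastGradientMethod` and `art.estimators.classification.SklearnClassifier`, and may differ in other releases):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary model on clean data.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART can query its predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Craft adversarial inputs with the fast gradient method, one of the
# attack families listed above; eps bounds the per-feature perturbation.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```

The same wrap-a-model, instantiate-an-attack, call-generate pattern applies to the other evasion attacks in the toolbox.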

The field is maturing toward standard terminology: NIST recently published a draft, "A Taxonomy and Terminology of Adversarial Machine Learning" (NISTIR 8269): https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8269-draft.pdf

Links to supporting resources appear throughout these materials.

There is a good high-level primer on this topic at:
"Breaking neural networks with adversarial attacks -- Are the machine learning models we use intrinsically flawed?"
https://towardsdatascience.com/breaking-neural-networks-with-adversarial-attacks-f4290a9a45aa
By Anant Jain, February 9, 2019.

There is an excellent history of this topic at:
"Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning."
https://pralab.diee.unica.it/sites/default/files/biggio18-pr.pdf
By Battista Biggio and Fabio Roli.

Also see: "Guidelines for Responsible and Human-Centered Use of Explainable Machine Learning."
By Patrick Hall, June 11, 2019.
https://arxiv.org/pdf/1906.03533.pdf

"Proposals for model vulnerability and security. Apply fair and private models, white-hat and forensic model debugging, and common sense to protect machine learning models from malicious actors."
By Patrick Hall, March 20, 2019
https://www.oreilly.com/ideas/proposals-for-model-vulnerability-and-security

"A Taxonomy and Terminology of Adversarial Machine Learning: NIST Releases Draft NISTIR 8269 for Comment."
October 30, 2019
https://csrc.nist.gov/publications/detail/nistir/8269/draft
https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8269-draft.pdf
https://doi.org/10.6028/NIST.IR.8269-draft