Robust Anomaly Detection in Images using Adversarial Autoencoders - shubham223601/Anomaly-Detection GitHub Wiki

Referenced from http://arxiv.org/abs/1901.06355

The approach described consists of two parts:

  1. Adversarial autoencoders (AAEs), which control the distribution of the latent representation. A combination of the reconstruction error and the likelihood under the prior is used as the anomaly score.

An adversarial autoencoder imposes a prior distribution p(z) on the latent space. A generative model of the data distribution p(x) is obtained by applying the decoder to samples drawn from the imposed prior. An AAE can impose any prior from which samples can be drawn. Anomalies are expected to have either a low likelihood under the prior or a high reconstruction error.
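The combined score can be made concrete with a small sketch. The encoder and decoder below are hypothetical stand-ins (simple linear maps, not a trained AAE), and the prior is assumed to be a standard normal N(0, I); only the scoring formula itself follows the description above.

```python
import numpy as np

# Hypothetical stand-ins for a trained AAE encoder f and decoder g:
# the "code" is just the first two features, the decoder pads zeros back.
def encode(x):
    return x[:, :2]

def decode(z):
    return np.hstack([z, np.zeros((z.shape[0], 2))])

def gaussian_log_likelihood(z):
    # log p(z) under an assumed standard-normal prior N(0, I)
    d = z.shape[1]
    return -0.5 * (np.sum(z ** 2, axis=1) + d * np.log(2 * np.pi))

def anomaly_score(x):
    # Reconstruction error plus negative log-likelihood under the prior:
    # anomalies should score high on at least one of the two terms.
    z = encode(x)
    recon_error = np.mean((x - decode(z)) ** 2, axis=1)
    return recon_error - gaussian_log_likelihood(z)
```

A point near the prior mode that reconstructs well gets a low score; a point far from the mode, or one the decoder cannot reconstruct, gets a high score.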

  2. An iterative refinement method for training-sample rejection. Possible anomalies in the training set can be identified in the low-dimensional latent space by a one-class SVM; by rejecting the least normal observations, the training data can be refined to contain only normal images.

The mechanism, based on the AAE, manipulates the training set during training by sample rejection so that the autoencoder focuses on the normal class.

Depending on the assumed prior, we can expect a separation in latent space between the encodings of normal and anomalous instances, since the AAE produces smoothly varying outputs for nearby points in latent space. If a rough estimate of the fraction of anomalies in the training set is known, a one-class SVM can be applied to the latent representations to search for a boundary that encloses a fraction 1 − α of the whole dataset.

  1. Likelihood-based anomaly detection. Since the AAE imposes a prior distribution p(z) on the latent representation, the likelihood p(z') of a new code vector z' = f(x') can be used as an anomaly score. A decision threshold T_prior is defined by computing the likelihood p(f(x)) under the imposed prior for all training examples and selecting a percentile of the distribution of p(f(x)) that matches the anomaly rate present in the data. New examples y with p(f(y)) < T_prior are then classified as anomalies.
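The thresholding step can be sketched as follows. The latent codes are simulated here (a real run would use f(x) from the trained encoder), the prior is assumed to be standard normal, and the 5% anomaly rate is an illustrative assumption.

```python
import numpy as np

def gaussian_log_likelihood(z):
    # log p(z) under an assumed standard-normal prior N(0, I)
    d = z.shape[1]
    return -0.5 * (np.sum(z ** 2, axis=1) + d * np.log(2 * np.pi))

rng = np.random.default_rng(0)
# Hypothetical training codes f(x); in practice these come from the encoder.
train_codes = rng.standard_normal((1000, 2))

# Assume a 5% anomaly rate: T_prior is the 5th percentile of p(f(x)).
anomaly_rate = 0.05
T_prior = np.percentile(gaussian_log_likelihood(train_codes),
                        100 * anomaly_rate)

def is_anomaly(z_new):
    # New codes z' = f(x') with likelihood below T_prior are anomalies.
    return gaussian_log_likelihood(z_new) < T_prior
```

A code at the prior mode is accepted as normal, while one far out in the tail falls below T_prior and is flagged.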

  2. Iterative training-set refinement. If the AAE is trained with the imposed prior, normal instances cluster around the mode of the prior in latent space. If anomalies are present in the training set, the AAE maps them to low-likelihood regions of the prior. To identify these anomalies, a one-class SVM is applied to the latent representations. The output of the one-class SVM is a decision boundary and a list of all normal data points. All other data points can be considered anomalies and can either be removed from the training set or weighted to contribute less to the overall loss than normal points.

Every training sample x_i is associated with a weight w_i, which is used to compute a weighted reconstruction loss. The autoencoder is trained to minimize this weighted reconstruction loss, and the same weights w_i are used in the adversarial training procedure.
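A minimal sketch of the weighted loss, assuming a squared-error reconstruction term (the paper's exact loss form is not spelled out here):

```python
import numpy as np

def weighted_reconstruction_loss(x, x_hat, w):
    # Per-sample squared reconstruction error, scaled by the weight w_i.
    # Rejected candidates (w_i = 0) contribute nothing to the loss.
    per_sample = np.mean((x - x_hat) ** 2, axis=1)
    return np.sum(w * per_sample) / np.sum(w)
</```

Setting w_i = 0 for a sample removes it from training entirely; small positive weights merely downweight it.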

A. Pretraining: the AAE is trained on the entire training set for a fixed number of epochs, with all weights set to 1.
B. Refinement and detection: a one-class SVM is trained on the latent space with an expected anomaly rate ν, which yields a set of candidate anomalies; these are assigned a weight of 0, removing them from training. The model is then trained on this refined data for a small number of epochs. These two steps are repeated n times, and with each iteration the total number of detected training anomalies increases. By iteratively excluding candidate anomalies, the model of the normal class is refined.
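The pretraining and refinement loop can be sketched as below. As a stand-in for the one-class SVM, each refinement step simply flags the fraction ν of still-active samples with the lowest prior likelihood; the latent codes are simulated, and the AAE training step itself is elided.

```python
import numpy as np

def log_lik(z):
    # log p(z) under an assumed standard-normal prior
    return -0.5 * (np.sum(z ** 2, axis=1) + z.shape[1] * np.log(2 * np.pi))

def refine_weights(codes, w, nu=0.05):
    # Stand-in for the one-class SVM step: among samples still active
    # (w_i > 0), reject the fraction nu with the lowest likelihood.
    ll = log_lik(codes)
    active = np.flatnonzero(w > 0)
    k = max(1, int(nu * active.size))
    reject = active[np.argsort(ll[active])[:k]]
    w = w.copy()
    w[reject] = 0.0
    return w

rng = np.random.default_rng(0)
normal = rng.standard_normal((95, 2))
anomalies = rng.standard_normal((5, 2)) + 6.0   # far from the prior mode
codes = np.vstack([normal, anomalies])          # hypothetical latent codes

w = np.ones(len(codes))       # A. pretraining: all weights set to 1
for _ in range(3):            # B. repeat refinement n times
    w = refine_weights(codes, w, nu=0.05)
    # ... retrain the AAE for a few epochs on the reweighted data ...
```

Each pass zeroes out more candidates, so the set of detected training anomalies grows monotonically, as described above.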

C. Retraining: once the anomalies in the training set have been identified and the model of the normal class refined, the model is retrained so that the reconstruction error on the detected anomalies increases. This can be done by setting the weights of the anomalies to very small values, forcing better separability of the two classes.
