Skip-GANomaly: Skip Connected and Adversarially Trained Encoder-Decoder Anomaly Detection
Referenced from http://arxiv.org/abs/1901.08954
The described approach captures the multi-scale distribution of normal data in a high-dimensional input space by employing an encoder-decoder architecture with skip connections.
The model consists of a Generator and a Discriminator (a minimal sketch of both networks follows the list below).
- Generator consists of an encoder and a decoder network, where the encoder captures the distribution of the input data by mapping a high-dimensional image into a low-dimensional latent representation.
- The decoder network upsamples the latent vector back to the input image dimensions and reconstructs the output. The decoder uses skip connections, where each downsampling layer of the encoder is connected to the corresponding upsampling layer of the decoder. The main advantage of these skip connections is that they allow information transfer between layers, capturing both local and global information.
- The discriminator classifies real images from the fake ones generated by the generator. It also acts as a feature extractor, providing the latent representations of the input image and the reconstructed image.
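A minimal PyTorch sketch of such an architecture is given below. The class names (`SkipGenerator`, `Discriminator`), channel counts, and layer depths are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a skip-connected encoder-decoder generator and a discriminator
# that doubles as a feature extractor. Channel sizes/depths are assumptions.
import torch
import torch.nn as nn

class SkipGenerator(nn.Module):
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        # Encoder: each block halves the spatial resolution.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1),
                                  nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2))
        # Decoder: each block doubles the resolution; inputs are concatenated
        # with the matching encoder feature map (the skip connection).
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)                          # local, high-resolution features
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)                         # low-dimensional bottleneck
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], 1))     # skip connection from enc2
        x_hat = self.dec1(torch.cat([d2, e1], 1))  # skip connection from enc1
        return x_hat

class Discriminator(nn.Module):
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Sequential(nn.Conv2d(base * 4, 1, 4, 1, 0), nn.Sigmoid())

    def forward(self, x):
        f = self.features(x)           # latent features reused by the latent loss
        logit = self.classifier(f)     # real / fake prediction
        return logit.view(x.size(0), -1).mean(1), f
```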
Training comprises three loss terms (a combined sketch of all three follows the list):
- Adversarial loss - maximizes the reconstruction capability for normal images: the generator reconstructs normal images as realistically as possible, while the discriminator distinguishes real images from the reconstructed (fake) ones. The objective is minimized with respect to G and maximized with respect to D.
- Contextual loss - captures the data distribution of the normal samples by penalizing the L1 distance between the input image and its reconstruction.
- Latent loss - keeps the latent representation of the input x and that of the generated (reconstructed) sample as similar as possible, using the features extracted by the discriminator.
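A sketch of the three loss terms for a single batch of normal images, reusing the `SkipGenerator`/`Discriminator` classes from the sketch above. The function name and the choice of MSE for the latent term are assumptions for illustration.

```python
# Sketch of the three loss terms for a batch of normal images `x`.
# `netG` / `netD` follow the classes sketched above (assumed, not the
# authors' exact implementation).
import torch
import torch.nn as nn

bce, l1, l2 = nn.BCELoss(), nn.L1Loss(), nn.MSELoss()

def generator_losses(netG, netD, x):
    x_hat = netG(x)
    pred_fake, feat_fake = netD(x_hat)
    _, feat_real = netD(x)

    # Adversarial loss: G tries to make D label the reconstruction as real.
    loss_adv = bce(pred_fake, torch.ones_like(pred_fake))
    # Contextual loss: L1 distance between input and reconstruction.
    loss_con = l1(x_hat, x)
    # Latent loss: match the discriminator features of x and x_hat.
    loss_lat = l2(feat_fake, feat_real)
    return loss_adv, loss_con, loss_lat, x_hat
```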
The training objective of the model is to capture the distribution of the training data in both the image space and the latent vector space. Capturing the distribution in both spaces allows the model to learn the higher- and lower-level features that are unique to normal images.
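One possible training step that combines the weighted sum of the three losses for G with the real/fake objective for D is sketched below. The lambda weights, the optimizers, and the `generator_losses` helper from the previous sketch are assumptions for illustration, not values or code taken from the paper.

```python
# Hedged sketch of a single training step: G minimizes the weighted sum of
# the three losses; D maximizes its real/fake separation.
import torch
import torch.nn as nn

bce = nn.BCELoss()

def train_step(netG, netD, optG, optD, x,
               lambda_adv=1.0, lambda_con=1.0, lambda_lat=1.0):
    # --- update generator: weighted sum of adversarial, contextual, latent ---
    loss_adv, loss_con, loss_lat, x_hat = generator_losses(netG, netD, x)
    loss_g = lambda_adv * loss_adv + lambda_con * loss_con + lambda_lat * loss_lat
    optG.zero_grad()
    loss_g.backward()
    optG.step()

    # --- update discriminator: real images labelled 1, reconstructions 0 ---
    pred_real, _ = netD(x)
    pred_fake, _ = netD(x_hat.detach())
    loss_d = (bce(pred_real, torch.ones_like(pred_real))
              + bce(pred_fake, torch.zeros_like(pred_fake)))
    optD.zero_grad()
    loss_d.backward()
    optD.step()
    return loss_g.item(), loss_d.item()
```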