Wiki Report for ICP 13 - NagaSurendraBethapudi/Python-ICP GitHub Wiki
Video link: https://drive.google.com/file/d/1irWo_7ymJ0b3RwI7EquM4iiKS1C4ih4v/view?usp=sharing

Question 1 & 2:
- Add one more hidden layer to the autoencoder
- Do the prediction on the test data and visualize one of the reconstructed versions of that test data. Also, visualize the same test data before reconstruction using Matplotlib
Explanation :
a. Imported the libraries
b. Loaded the data
#Loading the fashion_mnist data
import numpy as np
from tensorflow.keras.datasets import fashion_mnist

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.  # scale train data to [0, 1]
x_test = x_test.astype('float32') / 255.    # scale test data to [0, 1]
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))  # flatten train images to 784-length vectors
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))      # flatten test images to 784-length vectors
c. Built model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

encoding_dim = 32  # size of the encoded representation

# this is our input placeholder
input_img = Input(shape=(784,))
# adding an extra hidden layer
hidden_layer = Dense(encoding_dim, activation='relu')(input_img)
# "encoded" is the encoded representation of the input
encoded = Dense(encoding_dim, activation='relu')(hidden_layer)
# "decoded" is the lossy reconstruction of the input
decoded = Dense(784, activation='sigmoid')(encoded)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
d. Passed the data to the autoencoder model, recorded accuracy and loss, and added a callback.
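This training step can be sketched roughly as follows. The hyperparameters and the EarlyStopping callback here are illustrative assumptions, not necessarily the exact ones used, and small random arrays stand in for the flattened fashion_mnist data so the snippet is self-contained:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping

# small random stand-ins for the flattened fashion_mnist data
x_train = np.random.rand(64, 784).astype('float32')
x_test = np.random.rand(16, 784).astype('float32')

encoding_dim = 32
inp = Input(shape=(784,))
hidden = Dense(encoding_dim, activation='relu')(inp)
encoded = Dense(encoding_dim, activation='relu')(hidden)
decoded = Dense(784, activation='sigmoid')(inp if False else encoded)
autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy',
                    metrics=['accuracy'])

# an autoencoder is trained to reproduce its own input, so the
# target is the same array as the input
stop_early = EarlyStopping(monitor='val_loss', patience=3)
history = autoencoder.fit(x_train, x_train,
                          epochs=2, batch_size=32, shuffle=True,
                          validation_data=(x_test, x_test),
                          callbacks=[stop_early], verbose=0)
```

The `history` object returned by `fit` holds the per-epoch loss and accuracy used for the plots later in this report.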
e. Predicted one image and could see that the reconstruction was blurry.
f. Changed the optimizer from adadelta to adam, after which we were able to decode the image. Like Adadelta, Adam adapts a per-parameter learning rate; it both rescales and smooths the gradients using first- and second-moment estimates.
g. Predicted the data again and could see the decoded image.
h. Plotted a few images using the model
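The side-by-side plotting can be sketched like this with Matplotlib. Random arrays stand in for `x_test` and for the reconstructions `autoencoder.predict(x_test)`, so the snippet runs on its own:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for saving to file
import matplotlib.pyplot as plt
import numpy as np

# stand-ins for x_test and autoencoder.predict(x_test)
x_test = np.random.rand(10, 784)
decoded_imgs = np.random.rand(10, 784)

n = 5  # number of images to show
plt.figure(figsize=(2 * n, 4))
for i in range(n):
    # top row: original test image
    ax = plt.subplot(2, n, i + 1)
    ax.imshow(x_test[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
    # bottom row: reconstructed image
    ax = plt.subplot(2, n, i + 1 + n)
    ax.imshow(decoded_imgs[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
plt.savefig('reconstructions.png')
plt.close()
```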
i. Plotted loss and accuracy
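Plotting the curves from the Keras `history` object can be sketched as below; a synthetic `history.history`-style dict stands in for the real one:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for saving to file
import matplotlib.pyplot as plt

# stand-in for history.history returned by autoencoder.fit(...)
hist = {'loss': [0.40, 0.33, 0.30], 'val_loss': [0.38, 0.34, 0.32],
        'accuracy': [0.70, 0.76, 0.79], 'val_accuracy': [0.72, 0.75, 0.77]}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(hist['loss'], label='train loss')
ax1.plot(hist['val_loss'], label='test loss')
ax1.set_xlabel('epoch')
ax1.legend()
ax2.plot(hist['accuracy'], label='train accuracy')
ax2.plot(hist['val_accuracy'], label='test accuracy')
ax2.set_xlabel('epoch')
ax2.legend()
plt.savefig('loss_accuracy.png')
plt.close(fig)
```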
Tried rerunning the training with more epochs, since the accuracy shown above is not right: train accuracy should be higher than test accuracy. Increased epochs to 30.
j. Plotted loss and accuracy again.
It is looking better.
Question 3:
Repeat question 2 with a denoising autoencoder
Answer :
a. Imported the libraries
b. Loaded the data
#Loading the fashion_mnist data (the labels are not needed for the autoencoder)
(x_train, _), (x_test, _) = fashion_mnist.load_data()
x_train = x_train.astype('float32') / 255.  # scale train data to [0, 1]
x_test = x_test.astype('float32') / 255.    # scale test data to [0, 1]
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))  # flatten train images
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))      # flatten test images
c. Built the same model as in the previous question.
d. Passed the data to the denoising autoencoder model, recorded accuracy and loss, and added a callback.
e. Predicted one of the images (noisy input to clean reconstruction).
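For the denoising autoencoder the training inputs are corrupted copies of the images. A common way to create them is adding clipped Gaussian noise; the noise factor of 0.5 below is an assumption, not necessarily the value used in the ICP:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the flattened fashion_mnist train images
x_train = rng.random((64, 784)).astype('float32')

noise_factor = 0.5  # assumed strength of the Gaussian corruption
x_train_noisy = x_train + noise_factor * rng.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0., 1.)  # keep pixels in [0, 1]

# the denoising autoencoder is then trained with the noisy images as
# input and the clean images as target, e.g.:
#   autoencoder.fit(x_train_noisy, x_train, ...)
```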
f. Changed the optimizer from adadelta to adam, after which we were able to denoise the image. Like Adadelta, Adam adapts a per-parameter learning rate; it both rescales and smooths the gradients using first- and second-moment estimates.
g. Predicted the image again and could see the denoised reconstruction.
h. Plotted a few images using the model
i. Plotted loss and accuracy
Train and test losses are almost the same, but the test loss is lower than the train loss, which does not look right.
Accuracy of the train data is higher than that of the test data.
Learnings :
In this ICP we learned about
- Components of an autoencoder, its architecture and hyperparameters, and applications of autoencoders
- Data denoising and image reconstruction
- Usage of callbacks
Difficulties :
We faced issues with the optimizer and with the accuracy and loss plots.
- Optimizer: with ADADELTA we were unable to reconstruct the image well, so we switched to ADAM, which both rescales and smooths the gradients using first- and second-moment estimates.
- Accuracy and loss plots: they did not look right at first; training for more epochs brought them back to normal.