Wiki Report for ICP 11 - NagaSurendraBethapudi/Python-ICP GitHub Wiki
Video Link : https://drive.google.com/file/d/1R8FtRQP7L13345J-XSAmIdweB8PDs2gH/view?usp=sharing
Question 1 :
Using the source code at https://umkc.box.com/s/0jkz2eljon8v374xgy1f6b4ooni0bx3t , follow the instructions below and report how the performance changed.
- Applied all of the following changes:
- Convolutional input layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same', activation='relu', kernel_constraint=maxnorm(3)))
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Convolutional layer, 32 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
- Max Pool layer with size 2×2.
model.add(MaxPooling2D(pool_size=(2, 2)))
- Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
- Max Pool layer with size 2×2.
model.add(MaxPooling2D(pool_size=(2, 2)))
- Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', kernel_constraint=maxnorm(3)))
- Max Pool layer with size 2×2.
model.add(MaxPooling2D(pool_size=(2, 2)))
- Flatten layer.
model.add(Flatten())
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Fully connected layer with 1024 units and a rectifier activation function.
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Fully connected layer with 512 units and a rectifier activation function.
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
- Dropout layer at 20%.
model.add(Dropout(0.2))
- Fully connected output layer with 10 units and a Softmax activation function
model.add(Dense(10, activation='softmax'))
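Put together, the layer stack above can be sketched as one complete Keras model. This is a minimal sketch assuming TensorFlow 2.x (where the old `keras.constraints.maxnorm` is available as `MaxNorm`) and CIFAR-10-shaped 32×32×3 inputs; the optimizer and loss are the usual choices for this assignment, not taken from the source:

```python
# Sketch of the full stack described above (assumes TensorFlow 2.x,
# where keras.constraints.maxnorm is spelled MaxNorm).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.constraints import MaxNorm

def build_model(num_classes=10):
    model = Sequential([
        Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding='same',
               activation='relu', kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Conv2D(32, (3, 3), padding='same', activation='relu',
               kernel_constraint=MaxNorm(3)),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(64, (3, 3), padding='same', activation='relu',
               kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Conv2D(64, (3, 3), padding='same', activation='relu',
               kernel_constraint=MaxNorm(3)),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(128, (3, 3), padding='same', activation='relu',
               kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Conv2D(128, (3, 3), padding='same', activation='relu',
               kernel_constraint=MaxNorm(3)),
        MaxPooling2D(pool_size=(2, 2)),
        Flatten(),
        Dropout(0.2),
        Dense(1024, activation='relu', kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Dense(512, activation='relu', kernel_constraint=MaxNorm(3)),
        Dropout(0.2),
        Dense(num_classes, activation='softmax'),
    ])
    # Typical compile settings for CIFAR-10 (assumption, not from the source)
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model = build_model()
model.summary()
```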
Accuracy and Loss of the model after adding layers
- The loss of the model decreased after adding the extra layers.
- The accuracy of the model increased after adding the extra layers.
Question 2 :
Predict the first 4 images of the test data using the above model. Then compare with the actual labels of those 4 images to check whether the model predicted correctly.
Explanation :
Using the above model, the labels were predicted with the logic below:
# Predict function will predict as many test images as the user requests
def predict(x):
    for i in range(x):
        image = X_test[i]  # pick the i-th test image
        plt.imshow(image)
        plt.show()  # display the image
        print('Actual label : ', y_test[i])
        print('Predicted label : ', model.predict_classes(image.reshape(1, 32, 32, 3)))

predict(4)  # predict the first four images
The model predicted all four labels correctly.
Question 3 :
Visualize Loss and Accuracy using the history object.
Explanation :
Plotted the loss and accuracy curves using the history object returned by model.fit.
Observations:
After adding more layers, accuracy increased and loss decreased.
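The curves can be produced directly from the history object. A sketch, assuming `history = model.fit(...)` was stored; note the metric key is `'accuracy'` in newer Keras versions but `'acc'` in older ones, so the helper checks both. The dummy numbers at the bottom exist only to make the sketch self-contained:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; drop this line when running interactively
import matplotlib.pyplot as plt

def plot_history(hist):
    """Plot training/validation loss and accuracy from a Keras history dict."""
    # Newer Keras logs 'accuracy'; older versions used 'acc'.
    acc_key = 'accuracy' if 'accuracy' in hist else 'acc'
    fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(10, 4))
    ax_loss.plot(hist['loss'], label='train loss')
    if 'val_loss' in hist:
        ax_loss.plot(hist['val_loss'], label='val loss')
    ax_loss.set_xlabel('epoch')
    ax_loss.set_ylabel('loss')
    ax_loss.legend()
    ax_acc.plot(hist[acc_key], label='train acc')
    if 'val_' + acc_key in hist:
        ax_acc.plot(hist['val_' + acc_key], label='val acc')
    ax_acc.set_xlabel('epoch')
    ax_acc.set_ylabel('accuracy')
    ax_acc.legend()
    fig.tight_layout()
    return fig

# With a real run: fig = plot_history(model.fit(...).history); fig.savefig('curves.png')
# Demo with dummy numbers so the sketch runs on its own:
demo = {'loss': [1.8, 1.2, 0.9], 'val_loss': [1.9, 1.4, 1.1],
        'accuracy': [0.35, 0.55, 0.68], 'val_accuracy': [0.33, 0.50, 0.62]}
fig = plot_history(demo)
```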
Bonus Question :
Explanation :
Saved the model and reloaded it using the logic below:
from keras.models import load_model

model.save('convolution_model.h5')
model = load_model('convolution_model.h5')
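A quick way to confirm the reload worked is to check that the saved and reloaded models return identical predictions. This sketch uses a tiny stand-in model (retraining the full network here would be slow) and a temporary file path; both are illustrative assumptions, but the `save`/`load_model` calls are the same ones used in the report:

```python
import os
import tempfile

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential, load_model

# Tiny stand-in model just to demonstrate the save/load round trip.
model = Sequential([Dense(4, activation='relu', input_shape=(3,)),
                    Dense(2, activation='softmax')])
model.compile(loss='categorical_crossentropy', optimizer='adam')

x = np.random.rand(5, 3).astype('float32')
before = model.predict(x)

path = os.path.join(tempfile.mkdtemp(), 'convolution_model.h5')
model.save(path)              # same call as in the report
reloaded = load_model(path)   # architecture and weights come back intact
after = reloaded.predict(x)

assert np.allclose(before, after)  # identical outputs => round trip succeeded
```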
Changed the code slightly for finding the labels; this time the output also displays the model's confidence (the softmax probability) for each predicted image.
predictions = model.predict(X_test)  # softmax outputs for all test images

# predict_image function takes the inputs below:
#   i                 - index of the image to show
#   predictions_array - softmax output for image i
#   actual_label      - y_test (ground-truth labels)
#   img               - X_test (test images)
def predict_image(i, predictions_array, actual_label, img):
    actual_label, img = actual_label[i], img[i]
    predicted_label = np.argmax(predictions_array)  # class with the highest probability
    plt.xlabel(labels[int(actual_label)])  # label the x-axis with the class name
    print('Actual label : ', y_test[i], ' --- ', labels[int(actual_label)])
    print('Predicted label : ', model.predict_classes(img.reshape(1, 32, 32, 3)), ' --- ', f"{labels[int(predicted_label)]}")
    print('Model detected the image as', labels[int(predicted_label)], 'with', f"{100*np.max(predictions_array):2.0f}%", 'confidence')
    plt.imshow(img)
    plt.show()

# predict function calls predict_image for the requested number of images
def predict(x):
    for i in range(x):
        predict_image(i, predictions[i], y_test, X_test)

predict(4)
Output :
Learnings :
Learned about Convolutional Neural Networks and the differences between traditional neural networks and convolutional neural networks.
Challenges :
Everything went smoothly; no major challenges were faced.