ICP_11 DeepLearning

VIDEOLINK: https://drive.google.com/file/d/1BWwMgm9TUF-KyxcSdWI8ihSqPub5Eda5/view?usp=drivesdk

Q1: Follow the instructions below and then report how the performance changed (apply all at once). A Keras sketch of this layer stack is given after the list.

* Convolutional input layer, 32 feature maps with a size of 3×3 and a rectifier activation function.

* Dropout layer at 20%.

* Convolutional layer, 32 feature maps with a size of 3×3 and a rectifier activation function.

* Max Pool layer with size 2×2.

* Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.

* Dropout layer at 20%.

* Convolutional layer, 64 feature maps with a size of 3×3 and a rectifier activation function.

* Max Pool layer with size 2×2.

* Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.

* Dropout layer at 20%.

* Convolutional layer, 128 feature maps with a size of 3×3 and a rectifier activation function.

* Max Pool layer with size 2×2.

* Flatten layer.

* Dropout layer at 20%.

* Fully connected layer with 1024 units and a rectifier activation function.

* Dropout layer at 20%.

* Fully connected layer with 512 units and a rectifier activation function.

* Dropout layer at 20%.

* Fully connected output layer with 10 units and a softmax activation function.

Did the performance change?
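The following is a minimal Keras sketch of the layer stack listed above, assuming CIFAR-10 inputs of shape 32×32×3, `padding='same'` on the convolutions, and the variable name `model` used in the rest of this page; it is an illustration of the listed architecture, not the exact source code:

    # Sketch only: the layer stack from the list above (CIFAR-10, 32x32x3 inputs, 10 classes)
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, Dropout, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)),
        Dropout(0.2),
        Conv2D(32, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu', padding='same'),
        Dropout(0.2),
        Conv2D(64, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation='relu', padding='same'),
        Dropout(0.2),
        Conv2D(128, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dropout(0.2),
        Dense(1024, activation='relu'),
        Dropout(0.2),
        Dense(512, activation='relu'),
        Dropout(0.2),
        Dense(10, activation='softmax'),
    ])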

We imported all the required libraries, loaded the data with cifar10.load_data(), and normalized the inputs. Then we created a Sequential model, added all the layers listed above, compiled it, and fit it:

  • history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=epochs, batch_size=32)
  • Before adding the above layers, the source code gave an accuracy of 70.41%; after adding them, the accuracy rose to 79.02%. The outputs and a sketch of the full training code are shown below.
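A hedged sketch of the surrounding training code, assuming the `model` built in the sketch above; the loss, optimizer, and epoch count are assumptions for illustration, not values taken from the original code:

    # Sketch only: data preparation, compile, and fit
    from tensorflow.keras.datasets import cifar10
    from tensorflow.keras.utils import to_categorical

    (X_train, y_train), (X_test, y_test) = cifar10.load_data()
    X_train = X_train.astype('float32') / 255.0      # scale pixel values to [0, 1]
    X_test = X_test.astype('float32') / 255.0
    y_train = to_categorical(y_train, 10)            # one-hot labels for categorical_crossentropy
    y_test = to_categorical(y_test, 10)

    # loss, optimizer, and epoch count are assumptions, not taken from the original source
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    epochs = 25
    history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                        epochs=epochs, batch_size=32)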

After adding the extra layers, the accuracy increased from 70.41% to 79.02%.

Q3: Visualize loss and accuracy using the history object.

After completing the first task, we visualized the training history as graphs of loss and accuracy; a plotting sketch follows.
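A minimal plotting sketch using the `history` object returned by `model.fit` above (on older Keras versions the keys are `acc`/`val_acc` instead of `accuracy`/`val_accuracy`):

    # Sketch only: plot training/validation accuracy and loss from the History object
    import matplotlib.pyplot as plt

    plt.figure()
    plt.plot(history.history['accuracy'], label='train accuracy')      # 'acc' on older Keras
    plt.plot(history.history['val_accuracy'], label='validation accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()

    plt.figure()
    plt.plot(history.history['loss'], label='train loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()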

Q2 (Bonus Question): Predict the first 4 images of the test data using the above model. Then, compare with the actual labels for those 4 images to check whether or not the model has predicted correctly.

We imported all the libraries, loaded the CIFAR-10 data again, and loaded the model we had saved as model.h5 (a loading sketch follows the normalization code). Then we normalized the inputs:

    X_train = X_train.astype('float32')
    X_test = X_test.astype('float32')
    X_train = X_train / 255.0
    X_test = X_test / 255.0
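A short sketch of the model-loading step described above, assuming the file name model.h5 from the prose and the variable name `data` that the prediction snippet below uses:

    # Sketch only: reload the trained model that was saved as model.h5
    from tensorflow.keras.models import load_model

    data = load_model('model.h5')   # the prediction snippet below refers to the model as `data`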

We pass each test image through the loaded model for prediction and compare the result with its actual label using the snippet below (the enclosing loop is sketched after it):

    # i indexes the test image; prediction_arr holds the softmax probabilities for image i
    prediction_arr, original_label, image = prediction_arr, original_label[i], image[i]
    predicted_label = np.argmax(prediction_arr)   # class index with the highest probability
    plt.xlabel(labels[int(original_label)])
    print('Original label : ', y_test[i], ' -->', labels[int(original_label)])
    # predict_classes exists on older Keras Sequential models; newer TF uses np.argmax(data.predict(...))
    print('Predicted label: ', data.predict_classes(image.reshape(1, 32, 32, 3)), ' -->', f"{labels[int(predicted_label)]}")
    print('Detected image as', labels[int(predicted_label)], 'with a confidence of', f"{100*np.max(prediction_arr):2.0f}%")
    plt.imshow(image)
    plt.show()
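For context, a hedged sketch of the loop this snippet sits inside, assuming `data` is the loaded model, `labels` holds the CIFAR-10 class names, and `y_test` holds the integer labels as returned by cifar10.load_data(); the exact loop structure is an assumption:

    # Sketch only: predict the first 4 test images and compare with the actual labels
    import numpy as np
    import matplotlib.pyplot as plt

    labels = ['airplane', 'automobile', 'bird', 'cat', 'deer',
              'dog', 'frog', 'horse', 'ship', 'truck']       # CIFAR-10 class names

    predictions = data.predict(X_test[:4])                   # softmax outputs, shape (4, 10)
    for i in range(4):
        prediction_arr = predictions[i]
        original_label = int(y_test[i][0])                   # y_test has shape (10000, 1)
        predicted_label = int(np.argmax(prediction_arr))
        print('Original label :', labels[original_label])
        print('Predicted label:', labels[predicted_label],
              f"({100 * np.max(prediction_arr):2.0f}% confidence)")
        plt.imshow(X_test[i])
        plt.show()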

Output:

Learned from this ICP: CNNs (convolutional neural networks).