ICP_9 DeepLearning - acvc279/Python_Deeplearning GitHub Wiki

VIDEO LINK: https://drive.google.com/file/d/1UuiLgp7lbMFMZXNcVu9tLe5x-1ipIaVM/view?usp=drivesdk

Q1: Plot the loss and accuracy for both training data and validation data using the history object in the source code.

First, import all the necessary libraries, then load the MNIST train and test data. Display the first image in the training data:

# show the first training image along with its label
plt.imshow(train_images[0], cmap='gray')
plt.title('Ground Truth : {}'.format(train_labels[0]))
plt.show()

Start processing the data: convert each image of shape 28x28 into a 784-dimensional vector, which will be fed to the network as a single feature vector:

# flatten each 28x28 image into a single 784-dimensional row
dimData = np.prod(train_images.shape[1:])
train_data = train_images.reshape(train_images.shape[0], dimData)
test_data = test_images.reshape(test_images.shape[0], dimData)

Convert the data to float so it can be scaled to values between 0 and 1:

# cast to float so the integer pixel values can be scaled
train_data = train_data.astype('float')
test_data = test_data.astype('float')
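The scaling and one-hot encoding mentioned in the next step can be sketched as follows. This is a minimal sketch assuming Keras's `to_categorical` utility; the small random arrays stand in for the flattened MNIST data from the snippets above:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# stand-in for the flattened, float-cast image data from the previous step
train_data = np.random.randint(0, 256, size=(6, 784)).astype('float')
train_labels = np.array([0, 1, 2, 3, 4, 5])

# scale pixel values from [0, 255] down to [0, 1]
train_data = train_data / 255.0

# convert integer labels (0-9) into one-hot vectors of length 10
train_labels_one_hot = to_categorical(train_labels, num_classes=10)
print(train_labels_one_hot.shape)
```

The same two lines are applied to the test arrays in the actual ICP code.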

Scale the data, then change the labels from integers to one-hot encoding. Create the sequential network:

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(dimData,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))

Compile the model and evaluate it: we get an accuracy of about 97% and a loss of about 16%. Here are the plots of loss and accuracy:
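The compile/fit step and the history-based plotting can be sketched as below. To keep the sketch quick to run it uses tiny random data, but the same calls apply to the prepared MNIST arrays; the optimizer, epoch count, and batch size here are illustrative choices, not necessarily the assignment's exact settings:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

dimData = 784
x = np.random.rand(32, dimData)
y = np.eye(10)[np.random.randint(0, 10, 32)]  # random one-hot labels

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(dimData,)))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])

# history.history holds per-epoch loss/accuracy for both the
# training data and the validation data
history = model.fit(x, y, batch_size=16, epochs=2,
                    validation_split=0.25, verbose=0)

plt.figure()
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.savefig('loss.png')

plt.figure()
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.legend()
plt.savefig('accuracy.png')
```

Note that in TF2-style Keras the history keys are `'accuracy'`/`'val_accuracy'`; older versions used `'acc'`/`'val_acc'`.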

Q2: Plot one of the images in the test data, and then do inferencing to check what is the prediction of the model on that single image.

Import the libraries, then load the MNIST train and test data. Process the data and implement the model the same as in Q1. Evaluating on the test data gives about 98% accuracy and 14% loss. Here is the prediction for a random test image.
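The single-image inference can be sketched as follows. This is a sketch only: the one-layer model and random `test_data` here stand in for the trained model and flattened test images from the Q1 pipeline:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# stand-ins for the trained model and processed test data from Q1
model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])
test_data = np.random.rand(10, 784)

# pick one image, keeping the batch dimension, and predict its class
single_image = test_data[0:1]                  # shape (1, 784)
probs = model.predict(single_image, verbose=0)  # shape (1, 10)
predicted_class = int(np.argmax(probs, axis=1)[0])
print('Predicted digit:', predicted_class)
```

In the actual ICP the predicted class is compared against the plotted image's ground-truth label.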

Q3: We had used 2 hidden layers and ReLU activation. Try to change the number of hidden layers and the activation to tanh or sigmoid and see what happens.

Import the libraries, then load the MNIST train and test data. Process the data and implement the model. During the implementation, change the hidden layers as the question asks:

model.add(Dense(512, activation='relu', input_shape=(dimData,)))
model.add(Dense(512, activation='relu'))
# two extra hidden layers with tanh and sigmoid activations
model.add(Dense(300, activation='tanh'))
model.add(Dense(300, activation='sigmoid'))
model.add(Dense(10, activation='softmax'))

We found that the loss reduced from 16% to 10%, while the accuracy stayed at about 98%.

  • Here is the plot of loss and accuracy.

Q4: Run the same code without scaling the images and check the performance.

Running the Q3 code without scaling, we observed that the loss increased from 10% to 12% and the accuracy decreased from 98% to 96%. Here is the plot of loss and accuracy.
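The only change in this experiment is skipping the division by 255. A minimal sketch of the difference, with a small random array standing in for the raw MNIST images:

```python
import numpy as np

# stand-in for the raw MNIST training images (integer pixels 0-255)
train_images = np.random.randint(0, 256, size=(4, 28, 28))
dimData = np.prod(train_images.shape[1:])

# scaled variant (Q1-Q3): values in [0, 1]
scaled = train_images.reshape(train_images.shape[0], dimData).astype('float') / 255.0

# unscaled variant (Q4): raw values in [0, 255] are fed directly to the network
unscaled = train_images.reshape(train_images.shape[0], dimData).astype('float')

print(scaled.shape, unscaled.shape)
```

Feeding unscaled inputs gives the first layer much larger activations, which is a plausible reason for the slightly worse loss and accuracy observed here.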

Learned from this ICP

Basic concepts of Neural Networks.