Wiki Report for ICP8 - NagaSurendraBethapudi/Python-ICP GitHub Wiki

Video Link : https://drive.google.com/file/d/1aIbaNKQk1Lw9lGatNIEYuNLKUI5cD6pO/view?usp=sharing


Question 1:

Use the use case from the class (https://umkc.box.com/s/3cvfiwc81lhgygc67deyeqs8m858lld0In):
Add more Dense layers to the existing code and check how the accuracy changes

Explanation:

  1. Imported the libraries
  2. Imported the dataset
  3. Partitioned the data into train and test sets
  4. Built the sequential model: my_first_nn = Sequential()
  5. Added the input and output layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
  6. Compiled the model, trained it, and printed the accuracy
#Compilation 
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100,initial_epoch=0)

#Printing the summary and accuracy of the model
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
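The steps above can be sketched end to end as follows. This is a minimal, hedged sketch: scikit-learn's `MLPClassifier` stands in for the Keras `Sequential` model, and synthetic data replaces the class dataset (which is not bundled here); the layer width of 20 mirrors the hidden layer used later in this report.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Steps 1-3: load a (synthetic) dataset and partition it into train/test
X, y = make_classification(n_samples=500, n_features=8, random_state=155)
X_train, X_test, Y_train, Y_test = train_test_split(
    X, y, test_size=0.25, random_state=155)

# Steps 4-6: build a model with one hidden layer, train it, and report
# test accuracy (MLPClassifier uses a logistic output for binary targets)
model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=155)
model.fit(X_train, Y_train)
print("test accuracy:", model.score(X_test, Y_test))
```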

Output with one dense layer

  1. Added one more dense layer
my_first_nn.add(Dense(40, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer

Output with two dense layers

  1. Added three more dense layers
my_first_nn.add(Dense(40, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(60, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(100, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer

Output with four dense layers

Output for all the layers
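The depth experiment above can be sketched as a loop that trains the same network with one, two, and four hidden layers and compares test accuracy. Again `MLPClassifier` is a hedged stand-in for the Keras model, and the layer widths (20/40/60/100) mirror the Dense layers in the report; the synthetic data is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=155)
X_train, X_test, Y_train, Y_test = train_test_split(
    X, y, test_size=0.25, random_state=155)

# One configuration per experiment in the report
configs = {
    "1 hidden layer": (20,),
    "2 hidden layers": (20, 40),
    "4 hidden layers": (20, 40, 60, 100),
}
results = {}
for name, layers in configs.items():
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=500,
                          random_state=155)
    model.fit(X_train, Y_train)
    results[name] = model.score(X_test, Y_test)
    print(f"{name}: accuracy = {results[name]:.3f}")
```

On small datasets, deeper networks do not always improve accuracy; they can overfit, which is exactly what this comparison is meant to reveal.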


Question 2 :

Change the data source to the Breast Cancer dataset available in the source folder https://umkc.box.com/s/3cvfiwc81lhgygc67deyeqs8m858lld0In and make the required changes.

Explanation :

  1. Imported the libraries
  2. Imported the dataset
  3. Changed the categorical data to int
convert = {"diagnosis": {"M": 0, "B": 1}}
cancer_data = df.replace(convert)
cancer_data.head()
  4. Partitioned the data into train and test sets
  5. Built the sequential model: my_first_nn = Sequential()
  6. Added the input and output layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
  7. Added the layers, compiled the model, and printed the accuracy
np.random.seed(155)
my_first_nn = Sequential() # create model
my_first_nn.add(Dense(20, input_dim=29, activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer

#Performing compilation
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100,initial_epoch=0)

#Printing Summary and Accuracy
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
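The Question 2 pipeline can be sketched with scikit-learn's bundled breast cancer data in place of the CSV from the source folder (the bundled version has 30 features, whereas the report uses input_dim=29, so the input size is an assumption that differs), and with `MLPClassifier` standing in for the Keras model:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)

# The bundled target is already 0/1 (0 = malignant, 1 = benign), which
# matches the report's mapping of a string "diagnosis" column:
diagnosis = pd.Series(["M", "B", "B"]).replace({"M": 0, "B": 1})

X_train, X_test, Y_train, Y_test = train_test_split(
    df.values, data.target, test_size=0.25, random_state=155)
model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=155)
model.fit(X_train, Y_train)
print("test accuracy:", model.score(X_test, Y_test))
```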

Accuracy with one dense layer :

Accuracy with two dense layers :


Question 3 :

Normalize the data before feeding it to the model and check how normalization changes the accuracy (code given below).

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()

Explanation :

  1. Imported the libraries
  2. Imported the dataset
  3. Changed the categorical data to int
convert = {"diagnosis": {"M": 0, "B": 1}}
cancer_data = df.replace(convert)
cancer_data.head()
  4. Partitioned the data into train and test sets
  5. Normalized the data
SC = StandardScaler()
SC.fit(x) # fitting the scaler to the data
x_normalization = SC.transform(x)
  6. Built the sequential model: my_first_nn = Sequential()
  7. Added the input and output layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
  8. Added the layers, compiled the model, and printed the accuracy
np.random.seed(155)
my_first_nn = Sequential() # create model
my_first_nn.add(Dense(20, input_dim=29, activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer

#Performing compilation
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100,initial_epoch=0)

#Printing Summary and Accuracy
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
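The normalization experiment can be sketched as the same model trained on raw versus StandardScaler-normalized features. One hedged change from the report's code: fitting the scaler on the training split only (rather than on all of x) avoids leaking test-set statistics into training. `MLPClassifier` again stands in for the Keras model, and the bundled breast cancer data replaces the CSV.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(
    X, y, test_size=0.25, random_state=155)

sc = StandardScaler()
X_train_norm = sc.fit_transform(X_train)  # fit on training data only
X_test_norm = sc.transform(X_test)        # reuse training mean/std

raw = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=155)
raw.fit(X_train, Y_train)
norm = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=155)
norm.fit(X_train_norm, Y_train)
print("accuracy before normalization:", raw.score(X_test, Y_test))
print("accuracy after  normalization:", norm.score(X_test_norm, Y_test))
```

Scaling matters here because the breast cancer features span very different ranges, and gradient-based training converges poorly when inputs are on wildly different scales.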

Accuracy before Normalization :

Accuracy after Normalization :

Conclusion :

Accuracy increased and loss decreased after normalizing the data.


Challenges:

None; everything worked as expected.