Wiki Report for ICP8
Video Link: https://drive.google.com/file/d/1aIbaNKQk1Lw9lGatNIEYuNLKUI5cD6pO/view?usp=sharing

Question 1:
Use the use case from the class: https://umkc.box.com/s/3cvfiwc81lhgygc67deyeqs8m858lld0
Add more Dense layers to the existing code and check how the accuracy changes.
Explanation:
- Imported the libraries
- Imported the dataset
- Partitioned the data into train and test data
- Built the sequential model
my_first_nn = Sequential() # create model
- Input and output layers are added
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
- Compiled the model and printed the accuracy (a combined end-to-end sketch follows the code below)
#Compilation
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100, initial_epoch=0)
#Printing the summary and accuracy of the model
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
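Putting the steps above together, the baseline can be run end to end with a sketch like the one below. The file name diabetes.csv, the label being the last column, and the split/seed values are assumptions (the class use case is only linked above); the "Output with one dense layer" result that follows comes from this baseline configuration.
#Minimal end-to-end sketch of the baseline (file name and split values assumed)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(155)
data = pd.read_csv("diabetes.csv") #hypothetical file name for the class dataset
X = data.iloc[:, :-1].values #all columns except the label
Y = data.iloc[:, -1].values #binary label column
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=87)

my_first_nn = Sequential() # create model
my_first_nn.add(Dense(20, input_dim=X.shape[1], activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100, initial_epoch=0)
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))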
Output with one dense layer
- Added one dense layer
my_first_nn.add(Dense(40, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
Output with two dense layers
- Added two more dense layers (see the comparison sketch after the outputs below)
my_first_nn.add(Dense(40, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(60, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(100, activation='relu')) #adding more hidden layers
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
Output with four dense layers
Output for all the layers
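To reproduce the comparison behind the outputs above, the model can be rebuilt with different hidden-layer stacks in a loop and scored on the same test split. This is a sketch that reuses X, X_train, X_test, Y_train, Y_test from the baseline sketch; the layer widths come from the snippets above.
#Sketch: compare test accuracy as hidden Dense layers are added
hidden_configs = {
    "one dense layer": [20],
    "two dense layers": [20, 40],
    "four dense layers": [20, 40, 60, 100],
}
for name, widths in hidden_configs.items():
    model = Sequential()
    model.add(Dense(widths[0], input_dim=X.shape[1], activation='relu')) # first hidden layer
    for width in widths[1:]:
        model.add(Dense(width, activation='relu')) # additional hidden layers
    model.add(Dense(1, activation='sigmoid')) # output layer
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
    model.fit(X_train, Y_train, epochs=100, verbose=0)
    loss, acc = model.evaluate(X_test, Y_test, verbose=0)
    print(name, "loss:", round(loss, 4), "accuracy:", round(acc, 4))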
Question 2:
Use the use case from the class https://umkc.box.com/s/3cvfiwc81lhgygc67deyeqs8m858lld0 and make the required changes:
Change the data source to the Breast Cancer dataset (available in the source folder).
Explanation:
- Imported the libraries
- Imported the dataset
- Changed categorical data to int
convert = {"diagnosis": {"M": 0, "B": 1}}
cancer_data = df.replace(convert)
cancer_data.head()
- Partitioned the data into train and test data
- Built the sequential model
my_first_nn = Sequential() # create model
- Input and output layers are added
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
- Added layers, compiled the model, and printed the accuracy (a data-preparation sketch follows this code)
np.random.seed(155)
my_first_nn = Sequential() # create model
my_first_nn.add(Dense(20, input_dim=29, activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
#Performing compilation
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100, initial_epoch=0)
#Printing Summary and Accuracy
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
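The data preparation for this question can be sketched as below. The file name BreastCancer.csv and the id column are assumptions (only the diagnosis mapping and input_dim=29 appear in the report); the printed shape shows the feature count to use for input_dim.
#Sketch of the Question 2 data preparation (file and column names assumed)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("BreastCancer.csv") #hypothetical file name for the source-folder dataset
convert = {"diagnosis": {"M": 0, "B": 1}} #map categorical labels to int
cancer_data = df.replace(convert)
x = cancer_data.drop(["id", "diagnosis"], axis=1).values #feature matrix (id column assumed)
y = cancer_data["diagnosis"].values #0/1 labels
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=0.25, random_state=87)
print(X_train.shape) #the second value is the input_dim for the network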
Accuracy with one dense layer :
Accuracy with two dense layers :
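The two-dense-layer variant is not listed above; a sketch of what it might look like, assuming the extra hidden layer mirrors the Dense(40) layer used in Question 1 and reusing the split from the sketch above:
#Sketch of the two-hidden-layer variant for the cancer data (layer width assumed)
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

np.random.seed(155)
model_two = Sequential() # create model
model_two.add(Dense(20, input_dim=X_train.shape[1], activation='relu')) # hidden layer 1
model_two.add(Dense(40, activation='relu')) # hidden layer 2 (width assumed)
model_two.add(Dense(1, activation='sigmoid')) # output layer
model_two.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
model_two.fit(X_train, Y_train, epochs=100, verbose=0)
print(model_two.evaluate(X_test, Y_test)) # [loss, accuracy]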
Question 3 :
Normalize the data before feeding it to the model and check how the normalization changes your accuracy (code given below).
- from sklearn.preprocessing import StandardScaler
- sc = StandardScaler()
Explanation :
- Imported the libraries
- Imported the dataset
- Changed categorical data to int
convert = {"diagnosis": {"M": 0, "B": 1}}
cancer_data = df.replace(convert)
cancer_data.head()
- Partitioned the data into train and test data
- Normalized the data
SC = StandardScaler()
SC.fit(x) #Fitting the data
x_normalization = SC.transform(x)
- Built the sequential model
my_first_nn = Sequential() # create model
- Input and output layers are added
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
- Added layers, compiled the model, and printed the accuracy (a before/after comparison sketch follows this code)
np.random.seed(155)
my_first_nn = Sequential() # create model
my_first_nn.add(Dense(20, input_dim=29, activation='relu')) # hidden layer
my_first_nn.add(Dense(1, activation='sigmoid')) # output layer
#Performing compilation
my_first_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
my_first_nn_fitted = my_first_nn.fit(X_train, Y_train, epochs=100, initial_epoch=0)
#Printing Summary and Accuracy
print(my_first_nn.summary())
print(my_first_nn.evaluate(X_test, Y_test))
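To reproduce the before/after comparison, the same small network can be trained once on the raw features and once on the normalized ones. This sketch reuses x and y from the Question 2 preparation sketch; as in the report, the scaler is fitted on the full feature matrix before splitting (fitting it only on the training split would avoid test-set leakage).
#Sketch: train the same model on raw vs. normalized features
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

def train_and_score(features, labels):
    #Build, train, and evaluate the same small network on the given data
    X_train, X_test, Y_train, Y_test = train_test_split(features, labels, test_size=0.25, random_state=87)
    model = Sequential()
    model.add(Dense(20, input_dim=features.shape[1], activation='relu')) # hidden layer
    model.add(Dense(1, activation='sigmoid')) # output layer
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
    model.fit(X_train, Y_train, epochs=100, verbose=0)
    return model.evaluate(X_test, Y_test, verbose=0) # [loss, accuracy]

SC = StandardScaler()
x_normalization = SC.fit_transform(x) #x and y come from the Question 2 sketch
print("before normalization:", train_and_score(x, y))
print("after normalization:", train_and_score(x_normalization, y))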
Accuracy before Normalization :
Accuracy after Normalization :
Conclusion :
Accuracy increased and the model's loss decreased after normalizing the data.
Challenges:
None; everything worked as expected.