ICP8 - SaranAkkiraju/Python_and_Deep_Learning_Programming_ICP GitHub Wiki

Objective

  • Use the use case from class: add one more dense layer to the existing code and check how the accuracy changes.
  • Change the data source to the Breast Cancer dataset.

Adding Dense Layers

1

  • Importing the necessary Python libraries: NumPy, Pandas, Keras, and scikit-learn.
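As a sketch, the import block could look like the following (assuming the standalone `keras` package; `tensorflow.keras` exposes the same classes):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential   # tensorflow.keras works identically
from keras.layers import Dense
```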

2

  • Reading the CSV into a pandas DataFrame.
  • Splitting the data into training and test sets.
  • 25% of the dataset goes to the test split and the remaining 75% to the training split.
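The read-and-split step can be sketched as below. The file name and column layout are assumptions (the input_dim of 8 used later suggests the Pima diabetes CSV, with eight feature columns and a binary label), and a small random DataFrame stands in for the real file so the snippet is self-contained:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for df = pd.read_csv("diabetes.csv") -- file name is an assumption
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((100, 8)), columns=[f"feat{i}" for i in range(8)])
df["Outcome"] = rng.integers(0, 2, size=100)

X = df.drop(columns="Outcome").values   # 8 feature columns
y = df["Outcome"].values                # binary label

# 25% of the rows go to the test split, the remaining 75% to training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(X_train.shape, X_test.shape)   # (75, 8) (25, 8)
```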

3

  • Initializing the Sequential model.
  • We used two hidden layers and one output layer.
  • Hyperparameters: number of neurons: 20 and 30; input_dim: 8; hidden-layer activation: ReLU; output activation: sigmoid.
  • Iterated through the list of neuron counts and added a new Dense layer with ReLU activation for each entry.
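A minimal sketch of the layer-building loop described above, assuming the neuron list [20, 30] from the hyperparameters and the standard Keras Sequential API:

```python
from keras.models import Sequential
from keras.layers import Dense

neurons = [20, 30]   # units per hidden layer, from the hyperparameters above
model = Sequential()
# The first hidden layer needs input_dim=8; later layers infer their input size
model.add(Dense(neurons[0], input_dim=8, activation="relu"))
for n in neurons[1:]:
    model.add(Dense(n, activation="relu"))
model.add(Dense(1, activation="sigmoid"))   # binary output layer
```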

4

  • With one hidden layer, we got an accuracy of around 63%.
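The compile/fit/evaluate step that produces this number is not shown in the notes; a hedged sketch would look like the following, where the optimizer, epoch count, and batch size are assumptions and random data stands in for the real features:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(20, input_dim=8, activation="relu"))   # one hidden layer
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Random stand-in data; the real run used the CSV features
rng = np.random.default_rng(0)
X_train = rng.random((100, 8))
y_train = rng.integers(0, 2, size=100)

model.fit(X_train, y_train, epochs=5, batch_size=10, verbose=0)
loss, acc = model.evaluate(X_train, y_train, verbose=0)
```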

Changing Data Source - Breast Cancer dataset

5

  • Importing the necessary Python libraries: Pandas, Keras, and scikit-learn.

6

  • Reading the CSV into a pandas DataFrame.

7

  • Dividing the data source into X and Y variables.
  • In this dataset the target variable is the diagnosis, so the Y variable will be the diagnosis column.
  • Slicing the remaining feature columns into the X variable accordingly.
  • Dropping the all-NaN column from the X DataFrame, as it is not significant for the model.
  • The target variable is categorical, but we need it in numerical form to train the model, so a lambda function maps 'M' to 0 and 'B' to 1. We also verified the value counts before and after the mapping.
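The slicing, NaN-column drop, and label mapping can be sketched like this. The column names (`diagnosis`, and `Unnamed: 32` for the empty column) follow the common Kaggle breast-cancer CSV and are assumptions; a few hand-made rows stand in for the real file:

```python
import pandas as pd

# Stand-in for pd.read_csv(...); column names are assumptions
df = pd.DataFrame({
    "diagnosis":   ["M", "B", "B", "M"],
    "radius_mean": [17.99, 13.54, 12.45, 20.57],
    "Unnamed: 32": [None, None, None, None],   # the all-NaN column
})

# Y: map the categorical diagnosis to numbers -- 'M' -> 0, 'B' -> 1
y = df["diagnosis"].apply(lambda d: 0 if d == "M" else 1)
print(df["diagnosis"].value_counts())   # counts before the mapping
print(y.value_counts())                 # counts after -- totals must match

# X: everything except the target and the insignificant NaN column
X = df.drop(columns=["diagnosis", "Unnamed: 32"])
```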

8

  • Splitting the data into training and test sets.
  • 25% of the dataset goes to the test split and the remaining 75% to the training split.
  • Initializing the Sequential model.
  • We used only one hidden layer and one output layer.
  • Hyperparameters: number of neurons: 30; input_dim: 30; hidden-layer activation: ReLU; output activation: sigmoid.
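The split and model for this step can be sketched as below; the optimizer choice is an assumption, input_dim=30 matches the 30 feature columns of the dataset, and random data stands in for the real features:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

# Random stand-in for the 30 feature columns and the 0/1 diagnosis labels
rng = np.random.default_rng(0)
X = rng.random((100, 30))
y = rng.integers(0, 2, size=100)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = Sequential()
model.add(Dense(30, input_dim=30, activation="relu"))  # single hidden layer
model.add(Dense(1, activation="sigmoid"))
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])
```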

9

  • With one hidden layer, the binary_crossentropy loss came out as 31% and the accuracy as around 90%.