Lab Assignment 4 - sirisha1206/Python GitHub Wiki

Python Lab Assignment 4

Name: Vinay Santhosham (Class ID: 28), Team: 4, Tech Partner: Naga Sirisha Sunkara (Class ID: 34)

CNN Datasets:

  1. Eco hotel data
  2. Sentiment-labelled data

1. Implement text classification with a CNN model, using a new dataset not used in class

Hyperparameters:

Filter Size : 3,4,5

Optimizer : RMSProp

Number of Filters : 32

Dropout Probability : 0.25

Batch size : 64

Number of epochs : 100

Output:

2018-07-27T11:07:08.333553: step 300, loss 0.0975096, acc 0.941176
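The model behind these runs is the standard TextCNN: parallel convolutions with the filter sizes above, ReLU, max-over-time pooling, dropout, and a softmax layer. A minimal NumPy sketch of one forward pass — the sequence length, embedding dimension, and class count below are illustrative assumptions, not values from the lab:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters from the sweep above; the other sizes are illustrative assumptions
filter_sizes, num_filters = [3, 4, 5], 32
seq_len, embed_dim, num_classes = 50, 128, 2

def text_cnn_forward(embedded, conv_w, conv_b, w_out, b_out, keep_prob=0.75):
    """Forward pass: conv -> ReLU -> max-over-time pool -> concat -> softmax.
    keep_prob = 1 - dropout probability (0.25 in the sweep above)."""
    pooled = []
    for size in filter_sizes:
        W, b = conv_w[size], conv_b[size]        # W: (size, embed_dim, num_filters)
        conv = np.stack([                        # "valid" convolution over time
            np.tensordot(embedded[t:t + size], W, axes=([0, 1], [0, 1])) + b
            for t in range(seq_len - size + 1)
        ])
        pooled.append(np.maximum(conv, 0).max(axis=0))   # ReLU + max-over-time
    features = np.concatenate(pooled) * keep_prob        # dropout scaling at test time
    logits = features @ w_out + b_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                               # class probabilities

# Random weights and a random "embedded sentence", just to check shapes
conv_w = {s: rng.normal(0, 0.1, (s, embed_dim, num_filters)) for s in filter_sizes}
conv_b = {s: np.zeros(num_filters) for s in filter_sizes}
w_out = rng.normal(0, 0.1, (len(filter_sizes) * num_filters, num_classes))
probs = text_cnn_forward(rng.normal(size=(seq_len, embed_dim)),
                         conv_w, conv_b, w_out, np.zeros(num_classes))
```

With filter sizes 3, 4, and 5 and 32 filters each, the pooled feature vector has 96 entries regardless of sentence length, which is what lets one dense layer sit on top.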

Filter Size : 3,4,5

Optimizer : Adam Optimizer

Number of Filters : 32

Dropout Probability : 0.25

Batch size : 64

Number of epochs : 100

Output:

2018-07-27T11:33:00.207615: step 300, loss 0.34742, acc 0.823529

Filter Size : 3,4,5

Optimizer : Adagrad Optimizer

Number of Filters : 32

Dropout Probability : 0.25

Batch size : 64

Number of epochs : 100

Output:

2018-07-27T11:36:44.425907: step 300, loss 0.541423, acc 0.764706

Filter Size : 3,4,5

Optimizer : Gradient descent Optimizer

Number of Filters : 32

Dropout Probability : 0.25

Batch size : 64

Number of epochs : 100

Output:

Evaluation:

2018-07-27T11:40:51.604835: step 300, loss 0.471642, acc 0.823529

Filter Size : 1,2,3

Optimizer : Gradient descent Optimizer

Number of Filters : 64

Dropout Probability : 0.125

Batch size : 32

Number of epochs : 50

Output:

Evaluation:

2018-07-27T11:44:25.826457: step 300, loss 0.555298, acc 0.705882

Filter Size : 1,2,3

Optimizer : Adam Optimizer

Number of Filters : 64

Dropout Probability : 0.125

Batch size : 32

Number of epochs : 50

Output:

Evaluation:

2018-07-27T11:47:46.011735: step 300, loss 0.179279, acc 0.882353

Filter Size : 1,2,3

Optimizer : Adagrad Optimizer

Number of Filters : 64

Dropout Probability : 0.125

Batch size : 32

Number of epochs : 50

Output:

Evaluation:

2018-07-27T11:50:11.530652: step 300, loss 0.782413, acc 0.529412

Filter Size : 1,2,3

Optimizer : RMSProp

Number of Filters : 64

Dropout Probability : 0.125

Batch size : 32

Number of epochs : 50

Output:

Evaluation:

2018-07-27T11:53:29.406375: step 300, loss 0.0986688, acc 1
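The four optimizers swept above differ only in how they turn a gradient into a parameter update. Single-step versions of the update rules, sketched in NumPy on a toy quadratic — the learning rates are illustrative defaults, not the lab's settings:

```python
import numpy as np

def sgd(w, g, state, lr=0.1):
    return w - lr * g                                   # plain gradient descent

def adagrad(w, g, state, lr=0.1, eps=1e-8):
    state["G"] = state.get("G", 0.0) + g ** 2           # accumulated squared gradients
    return w - lr * g / (np.sqrt(state["G"]) + eps)

def rmsprop(w, g, state, lr=0.01, decay=0.9, eps=1e-8):
    state["v"] = decay * state.get("v", 0.0) + (1 - decay) * g ** 2
    return w - lr * g / (np.sqrt(state["v"]) + eps)     # decaying average, not a sum

def adam(w, g, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    t = state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * g
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * g ** 2
    m_hat = state["m"] / (1 - b1 ** t)                  # bias-corrected moments
    v_hat = state["v"] / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimize f(w) = w^2 (gradient 2w) from the same start with each rule
finals = {}
for step in (sgd, adagrad, rmsprop, adam):
    w, state = 5.0, {}
    for _ in range(300):
        w = step(w, 2.0 * w, state)
    finals[step.__name__] = w
```

Adagrad's accumulated sum of squared gradients is what shrinks its effective step size over time, which is consistent with it trailing the adaptive-but-decaying RMSProp in the runs above.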

TensorBoard graph and summaries:

RNN Datasets:

  1. Eco hotel data
  2. Sentiment-labelled data

2. Implement text classification with an RNN/LSTM model, using a new dataset not used in class

Hyperparameters:

Dropout Probability : 0.25

Batch size: 64

Number of epochs: 100

Optimizer: RMSProp

Output:

Evaluation:

2018-07-27T19:50:37.805353: step 200, loss 0.37526, acc 0.909091
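The LSTM cell behind this model computes input, forget, and output gates plus a candidate cell state at every time step. A minimal NumPy version of one cell run over a short sequence — the input size, hidden size, and sequence length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
input_dim, hidden = 16, 8          # illustrative sizes, not the lab's actual config

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: input (i), forget (f), output (o) gates plus candidate (g)."""
    z = W @ x + U @ h_prev + b                 # all four gates in one (4 * hidden,) vector
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g                     # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c

W = rng.normal(0, 0.1, (4 * hidden, input_dim))
U = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in rng.normal(size=(10, input_dim)):     # run the cell over a 10-step sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

The final hidden state `h` is what a text classifier would feed into a softmax layer, analogously to the pooled features in the CNN.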

Dropout Probability : 0.25

Batch size: 64

Number of epochs: 100

Optimizer: Adam Optimizer

Output:

Evaluation:

2018-07-27T19:51:58.919084: step 200, loss 0.561634, acc 0.909091

Dropout Probability : 0.25

Batch size: 64

Number of epochs: 100

Optimizer: Adagrad Optimizer

Output:

Evaluation:

2018-07-27T19:52:14.095679: step 200, loss 4.38106, acc 0.727273

Dropout Probability : 0.25

Batch size: 64

Number of epochs: 100

Optimizer: Gradient descent Optimizer

Output:

Evaluation:

2018-07-27T19:52:29.014433: step 200, loss 2.02515, acc 0.818182

Dropout Probability : 0.125

Batch size: 32

Number of epochs: 50

Optimizer: RMSProp

Output:

Dropout Probability : 0.125

Batch size: 32

Number of epochs: 50

Optimizer: Adam Optimizer

Output:

Dropout Probability : 0.125

Batch size: 32

Number of epochs: 50

Optimizer: Adagrad

Output:

Dropout Probability : 0.125

Batch size: 32

Number of epochs: 50

Optimizer: Gradient descent

Output:

TensorBoard scalars and graphs:

3. Compare the results of the CNN and RNN/LSTM models on the same text-classification dataset, and describe which model is better for text classification based on your results

Datasets used:

  1. Eco hotel data
  2. Sentiment-labelled data

CNN parameters:

Filter Size : 3,4,5

Number of Filters : 32

Dropout Probability : 0.25

Batch size : 64

Number of epochs : 100

RNN parameters:

Dropout Probability : 0.25

Batch size: 64

Number of epochs: 100

Optimizer comparison (evaluation at the last logged step of each run):

| Optimizer | CNN (step 300) | RNN (step 200) |
| --- | --- | --- |
| RMSProp | loss 0.0975096, acc 0.941176 | loss 0.37526, acc 0.909091 |
| Adam | loss 0.34742, acc 0.823529 | loss 0.561634, acc 0.909091 |
| Adagrad | loss 0.541423, acc 0.764706 | loss 4.38106, acc 0.727273 |
| Gradient descent | loss 0.471642, acc 0.823529 | loss 2.02515, acc 0.818182 |

From the table above, the CNN model gives the better text-classification results on this dataset, and among the optimizers tried with the CNN, RMSProp performs best (loss 0.0975, accuracy 0.941).
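This conclusion can be double-checked mechanically by parsing the evaluation lines logged in the runs above (the strings below are copied verbatim from this report):

```python
import re

# (model, optimizer) -> logged evaluation line, copied from the runs above
logs = {
    ("CNN", "RMSProp"): "2018-07-27T11:07:08.333553: step 300, loss 0.0975096, acc 0.941176",
    ("CNN", "Adam"): "2018-07-27T11:33:00.207615: step 300, loss 0.34742, acc 0.823529",
    ("CNN", "Adagrad"): "2018-07-27T11:36:44.425907: step 300, loss 0.541423, acc 0.764706",
    ("CNN", "Gradient descent"): "2018-07-27T11:40:51.604835: step 300, loss 0.471642, acc 0.823529",
    ("RNN", "RMSProp"): "2018-07-27T19:50:37.805353: step 200, loss 0.37526, acc 0.909091",
    ("RNN", "Adam"): "2018-07-27T19:51:58.919084: step 200, loss 0.561634, acc 0.909091",
    ("RNN", "Adagrad"): "2018-07-27T19:52:14.095679: step 200, loss 4.38106, acc 0.727273",
    ("RNN", "Gradient descent"): "2018-07-27T19:52:29.014433: step 200, loss 2.02515, acc 0.818182",
}

def parse(line):
    """Pull (loss, acc) out of one logged evaluation line."""
    loss, acc = re.search(r"loss ([\d.]+), acc ([\d.]+)", line).groups()
    return float(loss), float(acc)

results = {key: parse(line) for key, line in logs.items()}
best = max(results, key=lambda k: results[k][1])   # pick the highest accuracy
# best -> ("CNN", "RMSProp"), matching the conclusion above
```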

4. Implement image classification with a CNN model, using a new dataset not used in class

(e.g., the CIFAR-10 dataset)

Code for CNN model:
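The CNN code itself is attached as a screenshot; as a stand-in, here is a minimal NumPy sketch of the kind of conv/ReLU/pool stack such a model applies to one 32×32×3 CIFAR-10-sized image — the kernel sizes and filter counts are assumptions for illustration, not the lab's exact layers:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2d_valid(image, kernels):
    """Naive "valid" 2D convolution + ReLU: image (H, W, C), kernels (kh, kw, C, F)."""
    kh, kw, _, F = kernels.shape
    H, W, _ = image.shape
    out = np.empty((H - kh + 1, W - kw + 1, F))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = image[i:i + kh, j:j + kw]            # (kh, kw, C) window
            out[i, j] = np.tensordot(patch, kernels, axes=3)
    return np.maximum(out, 0)                            # ReLU

def max_pool2(x):
    """2x2 max pooling with stride 2 (even H and W assumed)."""
    H, W, F = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, F).max(axis=(1, 3))

image = rng.normal(size=(32, 32, 3))                     # one CIFAR-10-sized input
k1 = rng.normal(0, 0.1, (5, 5, 3, 32))                   # assumed first conv layer
k2 = rng.normal(0, 0.1, (5, 5, 32, 64))                  # assumed second conv layer
x = max_pool2(conv2d_valid(image, k1))                   # (14, 14, 32)
x = max_pool2(conv2d_valid(x, k2))                       # (5, 5, 64)
features = x.reshape(-1)                                 # flattened for dense layers
```

Two conv/pool stages reduce the 32×32 image to a 5×5×64 volume, i.e. a 1600-dimensional feature vector for the fully connected classifier.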

Optimizer used: RMSProp

Output for CNN Model:

Adagrad optimizer

Output:

step 0, training accuracy 0.06

step 100, training accuracy 0.4

step 200, training accuracy 0.48

step 300, training accuracy 0.64

step 400, training accuracy 0.64

test accuracy 0.6137

Time for building convnet: 103051

Adam optimizer

step 0, training accuracy 0.22

step 100, training accuracy 0.8

step 200, training accuracy 0.82

step 300, training accuracy 0.84

step 400, training accuracy 0.76

test accuracy 0.8989

Time for building convnet: 95280

TensorBoard Graph:

YouTube links:

Part 1: https://youtu.be/lceLXh3cTis

Part 2: https://youtu.be/YCA4Bx5PCSc