TensorFlow2 - vidyasekaran/current_learning GitHub Wiki

Excerpts from tensorflow 2 advanced practicals

https://setosa.io/#/

Feature Detector - Convolution in action - (in a CNN, a convolution slides a small kernel, the feature detector, over the image to produce a filtered output) https://setosa.io/ev/image-kernels/

Sparsity means most values are zero. In the picture, any pixel value less than 0 is replaced with 0 (this is what ReLU does), which makes the feature maps sparse.
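A quick sketch of this thresholding using `tf.nn.relu` (the input values here are just illustrative):

```python
import tensorflow as tf

# ReLU replaces every negative value with 0, producing a sparse output
x = tf.constant([-2.0, -0.5, 0.0, 1.5, 3.0])
y = tf.nn.relu(x)
print(y.numpy())  # negative entries become 0.0
```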

How you represent a CNN - we start with convolutions - we apply ReLU - then downsampling (max pooling) - flatten it - then feed it to our feed-forward neural network.

Section 2 - Review of ANN and CNN

ANN & CNN in Action

Section 1 & 2: Applying convolution to extract information from an image, i.e. image classification / number classification. With a CNN we get the features out of an image: from the main image we derive multiple filtered images (a blurred image, a sharpened image, a darkened image, etc.) by applying convolutions.

https://www.cs.ryerson.ca/~aharley/vis/conv/flat.html (check this out for CNN number recognition in action). We apply it in stages:

  1. Input Layer --> we apply convolution in a layer here --> we apply compression in the next layer (downsampling)

  2. Layer1 : Input Layer

  3. Layer2 : Apply Convolution

  4. Layer3 : Apply Downsampling meaning Compress

  5. Layer4 : Apply another convolution to Layer 3's output, extracting higher-level features

  6. Layer5 : Apply another downsampling (compression of the image)

  7. Layer6 : Flatten the matrix to one line of values and feed it to a Fully Connected Layer

  8. Layer7 : Another Fully Connected Layer

  9. Layer8 : Output layer - guess the number here...
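The stages above can be sketched as a Keras `Sequential` model. This assumes 28x28 grayscale digit images and 10 output classes; the filter counts and dense-layer size are illustrative, not taken from the visualization:

```python
import tensorflow as tf

# Sketch of the stages: conv -> pool -> conv -> pool -> flatten -> dense -> output
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(28, 28, 1)),   # Layer2: convolution
    tf.keras.layers.MaxPooling2D(2),                   # Layer3: downsampling
    tf.keras.layers.Conv2D(64, 3, activation="relu"),  # Layer4: another convolution
    tf.keras.layers.MaxPooling2D(2),                   # Layer5: another downsampling
    tf.keras.layers.Flatten(),                         # Layer6: flatten to one line of values
    tf.keras.layers.Dense(128, activation="relu"),     # Layer7: fully connected
    tf.keras.layers.Dense(10, activation="softmax"),   # Layer8: guess the digit (10 classes)
])
model.summary()
```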

You improve accuracy by adding more feature detectors (filters) or by adding dropout (randomly dropping a neuron along with its weights during training - without it, neurons develop dependencies on previous layers and the network performs poorly in production).
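A minimal sketch of how a `Dropout` layer behaves (the rate of 0.5 is just an example):

```python
import tensorflow as tf

# Dropout randomly zeroes a fraction of activations during training only,
# so later layers cannot over-depend on any single neuron.
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 10))
print(drop(x, training=False).numpy())  # inference: values pass through unchanged
print(drop(x, training=True).numpy())   # training: ~half the units zeroed, survivors scaled by 2
```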

Section 15. Epochs - taking supervised learning as an example: we train the system, it predicts a value ŷ (y hat), we compare ŷ with the true Y and feed the error back into the network - we do this again and again. One full pass over the training data is one epoch.

Training Data - input data to train system.

Testing Data - used only after training, and the system should never have seen this data.

We divide the data into 3 parts (e.g. 50% training, 25% validation and 25% testing).

Training data is used for gradient calculation and weight updates. Validation data is applied to the system for cross-validation, to assess the quality of training. Cross-validation is used to avoid overfitting (where the system performs badly when we apply testing data): the system has to generalize instead of learning too many intricate details and becoming unable to generalize. When applying validation, if we see both training error and validation error going down, it is a good sign - the system is learning.
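Keras can hold out validation data for you via `validation_split` and report both losses each epoch. A minimal sketch with made-up toy data (the model and data here are purely illustrative):

```python
import numpy as np
import tensorflow as tf

# Toy data; in practice this would be your real training set.
x = np.random.rand(100, 4).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# validation_split holds out 20% of the data; Keras then reports training
# loss and validation loss each epoch, so you can watch both go down.
history = model.fit(x, y, epochs=3, validation_split=0.2, verbose=0)
print(history.history["loss"])
print(history.history["val_loss"])
```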

Section 16 : https://playground.tensorflow.org (visual representation of how to build and train neural networks)

Section 17: Gradient Descent
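Gradient descent repeatedly steps a parameter downhill along the gradient of the loss. A minimal sketch using `tf.GradientTape` on a one-variable loss (the function and learning rate are illustrative):

```python
import tensorflow as tf

# Minimize f(w) = (w - 3)^2 by gradient descent: the minimum is at w = 3.
w = tf.Variable(0.0)
lr = 0.1
for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (w - 3.0) ** 2
    grad = tape.gradient(loss, w)   # df/dw = 2 * (w - 3)
    w.assign_sub(lr * grad)         # step downhill: w <- w - lr * grad
print(w.numpy())  # converges close to 3.0
```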

Install TensorFlow 2.0

!pip install tensorflow-gpu==2.0.0-alpha0

Jupyter notebook shortcuts

Ctrl + Enter - run the current cell

  1. Eager execution

```python
import tensorflow as tf

x = tf.Variable(3)
y = tf.Variable(5)
z = tf.add(x, y)
print("The sum of x and y is:", z.numpy())
```

  2. Default Keras API

Keras is built into TensorFlow 2 as its default high-level API (tf.keras). For example:

```python
import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
```

  3. TensorBoard (a dashboard that shows performance - bottlenecks, loss, accuracy)

```python
# log_dir must be defined first; a timestamped directory (assumed here) is common:
import datetime
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(train_images, train_labels, epochs=5, callbacks=[tensorboard_callback])
```

Google Colab

Works like Jupyter notebooks but runs in GCP. You have the ability to select a GPU, TPU, etc.

You can build a model and train it in Google Colab.

Once coding is done, you can save it to Google Drive or Git.

Open Google Colab

Search for Google Colab in Google, click Open, and write code.

Google Colab

You can add +Code or +Text cells (+Text meaning you can add documentation). We can save notebooks to Google Drive, but we need to mount it first. The advantage of mounting Google Drive is that we can load data files from it as well.

On Windows, press the Win key, launch Jupyter Notebook, and open the URL in a browser.

You have menus such as File - Edit - View - Insert - Runtime - Tools, or search for Google Colab in Google and go ahead.

You can select the Python version, GPU, TPU, etc.

Runtime - run all

Edit - clear all outputs

10. Eager Execution