Lab 10
About the Lab 10 assignment:
This project is about training a model on the data set using the Inception architecture.
1x1 Convolution? 5x5 Convolution? Max Pooling? Why not all!?
The Inception module basically acts as multiple convolution filters applied to the same input, and it also does pooling at the same time. All of the results are then concatenated. This allows the model to take advantage of multi-level feature extraction from each input; for instance, it extracts more general (5x5) and more local (1x1) features at the same time.

I included three data sets, named airplanes, motorbikes, and cars. From these images we first need to extract the bottleneck features.

On the theory side, Deep Neural Networks (DNNs) can be analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information-theoretic limits of the DNN and obtain finite-sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that the optimal architecture, the number of layers, and the features/connections at each layer are all related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations of the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.
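To make the "concatenate all the branches" idea concrete, here is a minimal sketch of an Inception-style block in Keras. The filter counts and the 299x299 input size are illustrative assumptions; the real Inception v3 blocks used for retraining are more elaborate.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_block(x, filters_1x1=64, filters_5x5=32, pool_filters=32):
    # 1x1 convolution branch: cheap, captures very local / channel-wise features
    branch_1x1 = layers.Conv2D(filters_1x1, (1, 1), padding="same", activation="relu")(x)
    # 5x5 convolution branch: larger receptive field, more general features
    branch_5x5 = layers.Conv2D(filters_5x5, (5, 5), padding="same", activation="relu")(x)
    # Max-pooling branch, followed by a 1x1 convolution to control the depth
    branch_pool = layers.MaxPooling2D((3, 3), strides=1, padding="same")(x)
    branch_pool = layers.Conv2D(pool_filters, (1, 1), padding="same", activation="relu")(branch_pool)
    # Concatenate every branch along the channel axis: "why not all!?"
    return layers.Concatenate(axis=-1)([branch_1x1, branch_5x5, branch_pool])

inputs = tf.keras.Input(shape=(299, 299, 3))
model = tf.keras.Model(inputs, inception_block(inputs))
model.summary()
```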
After that, we send the output of the bottlenecks to label.py, where we get the score for each class. Output of label.py:

motorbikes side (score = 0.71495)
cars markus (score = 0.15052)
airplanes side (score = 0.13453)
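The pipeline behind those scores can be approximated with the following sketch: extract a bottleneck vector (the pooled penultimate-layer activations) with a pre-trained InceptionV3 and pass it through a small softmax head, printing scores in the same format as label.py. The label names match the categories above, but the paths and the untrained head are placeholders, not the actual lab code.

```python
import numpy as np
import tensorflow as tf

labels = ["motorbikes side", "cars markus", "airplanes side"]

# Bottleneck extractor: globally averaged penultimate activations (2048-dim)
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")

# Small softmax head; in the real lab this would be trained on the bottlenecks
head = tf.keras.Sequential([
    tf.keras.layers.Dense(len(labels), activation="softmax", input_shape=(2048,))
])

def classify(image_path):
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(299, 299))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]
    x = tf.keras.applications.inception_v3.preprocess_input(x)
    bottleneck = base.predict(x, verbose=0)          # shape (1, 2048)
    scores = head.predict(bottleneck, verbose=0)[0]  # one score per class
    for label, score in sorted(zip(labels, scores), key=lambda p: -p[1]):
        print(f"{label} (score = {score:.5f})")

# classify("images/motorbikes/example.jpg")  # hypothetical path
```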
The graphs and histograms can be found in the documentation folder.
The LeNet5 architecture was fundamental, in particular for the insight that image features are distributed across the entire image, and that convolutions with learnable parameters are an effective way to extract similar features at multiple locations with few parameters. At the time there were no GPUs to help training, and even CPUs were slow. Therefore being able to save parameters and computation was a key advantage. This is in contrast to using each pixel as a separate input to a large multi-layer neural network. LeNet5 explained that individual pixels should not be used as inputs to the first layer, because images are highly spatially correlated, and using individual pixels of the image as separate input features would not take advantage of these correlations.
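As a rough back-of-the-envelope illustration of that parameter saving (my own numbers, based on the published LeNet-5 layer sizes, not on this lab's code):

```python
# First convolutional layer of LeNet-5: six 5x5 filters over a 1-channel image
conv_params = (5 * 5 * 1 + 1) * 6        # 156 learnable parameters

# Treating every pixel of a 32x32 image as a separate input to a
# 120-unit fully connected layer instead:
dense_params = (32 * 32) * 120 + 120     # 123,000 learnable parameters

print(conv_params, dense_params)
```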
Data Set: The data set that I used consists of three categories: motorbikes, airplanes, and cars (markus). These were given as the test and validation sets to the Inception model, which generated the graphs shown in the output screenshots in the corresponding program folders.
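For completeness, here is a hedged sketch of how each category's images could be divided into training, validation, and test sets; the directory layout, folder names, and split fractions are assumptions, not the exact setup used in the lab.

```python
import os, random

def split_images(image_dir, val_frac=0.1, test_frac=0.1, seed=0):
    """Return {category: {"training": [...], "validation": [...], "test": [...]}}."""
    random.seed(seed)
    splits = {}
    for category in sorted(os.listdir(image_dir)):   # e.g. airplanes/, motorbikes/, cars/
        cat_dir = os.path.join(image_dir, category)
        if not os.path.isdir(cat_dir):
            continue  # skip stray files at the top level
        files = sorted(os.listdir(cat_dir))
        random.shuffle(files)
        n_val = int(len(files) * val_frac)
        n_test = int(len(files) * test_frac)
        splits[category] = {
            "validation": files[:n_val],
            "test": files[n_val:n_val + n_test],
            "training": files[n_val + n_test:],
        }
    return splits

# splits = split_images("images")  # hypothetical folder containing the three categories
```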