# Technology - hammii/CatEmotion GitHub Wiki

## System Architecture

### MobileNet
MobileNet is a state-of-the-art model that drastically reduces training time by shrinking the number of parameters. To understand how MobileNet achieves this, we first need to look at how an ordinary CNN works.
In a standard CNN, every filter is convolved over the input across all channels, and the per-channel results are summed into a single feature map. From this we can calculate the number of parameters required: N filters of size K×K applied over M input channels need K·K·M·N weights.
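As a worked example, the parameter count of one standard convolution layer (the layer sizes here are illustrative, not taken from the project):

```python
# Parameter count of one standard convolution layer.
# Assumed illustrative sizes: 3x3 kernel, 3 input channels, 16 filters.
k, c_in, c_out = 3, 3, 16

# Every filter carries a k x k patch of weights for each input channel.
standard_params = k * k * c_in * c_out
print(standard_params)  # 3 * 3 * 3 * 16 = 432
```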
MobileNet performs this same computation, but in a different manner: it splits the traditional convolution into two parts. The first part, called "depthwise convolution", applies one K×K filter to each input channel independently and stacks the per-channel results together. Next comes the "pointwise convolution", a 1×1 convolution that takes a weighted sum across the channels to produce one feature map.
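The two stages can be sketched in plain Python on a tiny input (illustrative sizes, no padding or stride; a real implementation would use a framework's optimized layers):

```python
# A pure-Python sketch of depthwise followed by pointwise convolution.
def depthwise(inp, kernels):
    # inp: [C][H][W]; kernels: one KxK kernel per channel.
    # Each channel is filtered independently -- no cross-channel mixing here.
    c, h, w = len(inp), len(inp[0]), len(inp[0][0])
    k = len(kernels[0])
    out = []
    for ch in range(c):
        plane = [[sum(inp[ch][i + a][j + b] * kernels[ch][a][b]
                      for a in range(k) for b in range(k))
                  for j in range(w - k + 1)]
                 for i in range(h - k + 1)]
        out.append(plane)
    return out

def pointwise(inp, weights):
    # 1x1 convolution: weighted sum across channels -> one feature map.
    c, h, w = len(inp), len(inp[0]), len(inp[0][0])
    return [[sum(weights[ch] * inp[ch][i][j] for ch in range(c))
             for j in range(w)] for i in range(h)]

image = [[[1.0] * 4 for _ in range(4)] for _ in range(3)]      # 3-channel 4x4 input
dw_kernels = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]  # one 3x3 kernel per channel
feature = pointwise(depthwise(image, dw_kernels), [1.0, 1.0, 1.0])
print(feature)  # [[27.0, 27.0], [27.0, 27.0]]: 9 per channel, summed over 3 channels
```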
The block diagram below contrasts the process of a traditional CNN with that of MobileNet.
You may be wondering: if we split the process in two, wouldn't that be just as inefficient and time-consuming? The calculation below shows that splitting the convolution reduces the computational cost to roughly 1/8 of the original for 3×3 kernels: the cost ratio works out to 1/N + 1/D_K², where N is the number of output filters and D_K is the kernel size.
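The cost comparison can be reproduced with a small calculation (the layer sizes below are illustrative, not from the project):

```python
# Multiplication counts for one layer, following the MobileNet cost analysis.
# Illustrative sizes: Dk = kernel side, M = input channels, N = filters, Df = feature-map side.
Dk, M, N, Df = 3, 32, 64, 112

standard = Dk * Dk * M * N * Df * Df                 # one fused convolution
separable = Dk * Dk * M * Df * Df + M * N * Df * Df  # depthwise + pointwise

print(separable / standard)  # equals 1/N + 1/Dk**2, roughly 1/8 for 3x3 kernels
```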
MobileNet also achieves accuracy competitive with much larger models, as shown below.
## Deploying the trained model on Android

To use the trained model on Android you must convert it to the `.tflite` format. This happens automatically if you train the model with Google Teachable Machine; otherwise, use the code below to convert a saved TensorFlow model.
```python
import tensorflow as tf

# Convert the saved model to TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Save the converted model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```
To add the `.tflite` model to the app, point `getModelPath()` and `getLabelPath()` in `ClassifierQuantizedMobileNet.java` at your files:
```java
@Override
protected String getModelPath() {
    return "converted_tflite_quantized/model.tflite";
}

@Override
protected String getLabelPath() {
    return "converted_tflite_quantized/labels.txt";
}
```