2.1 Train the model with darkflow
Darkflow for training the model
There are several ways to train the model; here I am using darkflow for this experiment.
Tensorflow setup for Darkflow
Install TensorFlow as described in the TensorFlow install guide.
There are several ways to set up a TensorFlow environment. I used native pip, since with pip3 I was getting some errors later while running flow.
# Try with sudo if you get a "permission denied" error.
$ sudo apt-get install python-pip python-dev # have pip
$ pip install --upgrade pip # better to have recent pip
$ pip install tensorflow # CPU-only build (no GPU support)
$ pip install numpy # numpy is needed for darkflow training
$ pip install Cython # Cython is needed for darkflow training
If you want, verify the install with the hello world example shown on the TensorFlow install page.
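As a quick sanity check from the shell, here is a minimal sketch; it assumes the TensorFlow 1.x Session API that was current when this page was written.
# TensorFlow 1.x API; should print the greeting if the install works
$ python -c "import tensorflow as tf; print(tf.Session().run(tf.constant('Hello, TensorFlow!')))"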
Darkflow setup
$ git clone https://github.com/thtrieu/darkflow.git # get darkflow
$ cd darkflow
$ pip install -e . # Install it globally in dev mode
# Installing another way should also be fine.
$ sudo pip install opencv-python # Required by darkflow flow script
# sudo permission might be needed.
$ flow -h # this should not give an error if all is good.
Now everything is set to start training our model.
Training
Once labels and annotations are created for the raw images, we have everything needed to train the model. Training is a heavy job for the computer and can take many hours or days. I trained on an Intel Core i7, a quad-core CPU, with no GPU, since my system doesn't have an Nvidia GPU. Training an object detector without a GPU-powered PC is a bad idea, which I only realized after my first attempt: it took about 20-22 hours for roughly 100 images, with all four CPU cores at 100% the whole time. Extrapolating, 1,000 images would take around 10 days and 10,000 images around 100 days, which is why CPU-only training is close to infeasible in practice.

Anyway, let's turn on the PC cooling fan and start a training run. Below is the command for training the model. I also needed to re-train to see whether the model improves with new data; I re-trained with only about 10-20 images and 40 epochs (see the resume sketch after the training command below). I wasn't sure if I was the only one doing this, so I raised the question on Stack Overflow, spoke to people, and found it is not an unusual thing to do.
Begin training on your own dataset
$ cp cfg/tiny-yolo-4c.cfg cfg/tiny-yolo-lpc.cfg
$ vi cfg/tiny-yolo-lpc.cfg
...
[convolutional]
...
filters=30
...
[region]
...
classes=1
...
Here classes is 1 since we are training for only one object. The rule of thumb for the last [convolutional] layer (the one just before [region]) is filters = 5 * (classes + 5); with classes = 1 this gives 5 * (1 + 5) = 30.
$ echo "licenseplate" > ./labels.txt
$ cd bin && wget https://pjreddie.com/media/files/tiny-yolo-voc.weights && cd -
$ mkdir -p train/Annotations && mkdir train/Images
$ flow \
--model cfg/tiny-yolo-lpc.cfg \
--load bin/tiny-yolo-voc.weights \
--train \
--annotation ../rawImages/indialicenseplateAnnotation \
--dataset ../rawImages/indialicenseplate \
--epoch 2000
# add --gpu 1.0 to the command above if an Nvidia GPU is available
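For the quick re-training runs mentioned earlier (a handful of new images, about 40 epochs), you can resume from the latest checkpoint instead of the original VOC weights. A minimal sketch, assuming checkpoints are in darkflow's default ckpt/ directory and reusing the same annotation/image folders; the epoch count is illustrative:
# --load -1 resumes from the most recent checkpoint in ckpt/
$ flow \
--model cfg/tiny-yolo-lpc.cfg \
--load -1 \
--train \
--annotation ../rawImages/indialicenseplateAnnotation \
--dataset ../rawImages/indialicenseplate \
--epoch 40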
# Save the latest checkpoint to a protobuf (.pb) file
$ flow --model cfg/tiny-yolo-lpc.cfg --load -1 --savepb
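To sanity-check the exported graph, darkflow can run it directly over a folder of test images. A minimal sketch, assuming the default built_graph/ output location produced by --savepb and a sample_img/ folder that you provide:
# Load the frozen graph and its .meta file, then run detection on images in sample_img/
$ flow --pbLoad built_graph/tiny-yolo-lpc.pb --metaLoad built_graph/tiny-yolo-lpc.meta --imgdir sample_img/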
Do all of the above in Google Cloud
Set up Google Cloud and run the entire training procedure above in the cloud in a fraction of the time. To create a Google Cloud instance, refer to the Google Cloud instructions.
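As a rough sketch of creating a VM from the gcloud CLI; the instance name, machine type, zone, and image below are illustrative placeholders and not part of the original guide:
# Illustrative only; pick a machine type, zone, and image that suit your budget
$ gcloud compute instances create darkflow-train \
--machine-type n1-standard-8 \
--zone us-central1-a \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud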