# Converting Yolo v3 models to TensorFlow and OpenVINO (IR) models
To use a Yolo v3 model with the OpenVINO framework, you should do 2 steps:

- Convert `yolov3.cfg`/`yolov3.weights` to the TensorFlow model `frozen_darknet_yolov3_model.pb`
- Convert `frozen_darknet_yolov3_model.pb` to the OpenVINO model `frozen_darknet_yolov3_model.xml`/`.bin`/`.mapping`
## Converting a Yolo v3 model: Darknet -> TensorFlow
- Download this repository: https://github.com/mystic123/tensorflow-yolo-v3/archive/ed60b9087b04e1d9ca40f8a9d8455d5c30c7c0d3.zip
- Un-pack it:

```
sudo apt-get install unzip
unzip file.zip -d tensorflow-yolo-v3
cd tensorflow-yolo-v3
```
- Download the default Yolo v3 model: https://pjreddie.com/media/files/yolov3.weights
- Download the `coco.names` file: https://raw.githubusercontent.com/AlexeyAB/darknet/master/data/coco.names
- Run this command:

```
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights
```

For tiny:

```
python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny
```

- You will get the TensorFlow model `frozen_darknet_yolov3_model.pb` (see the sanity-check sketch below)
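Before moving on, you can sanity-check the frozen graph. Below is a minimal sketch, assuming TensorFlow 1.x (the version the converter repository targets); it loads the `.pb` file and prints the input placeholder and the final operation:

```python
import tensorflow as tf

# Load the frozen graph produced by convert_weights_pb.py
with tf.gfile.GFile('frozen_darknet_yolov3_model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# Print the input placeholder and the last op to confirm the topology loaded
for op in graph.get_operations():
    if op.type == 'Placeholder':
        print('input:', op.name, op.outputs[0].shape)
print('last op:', graph.get_operations()[-1].name)
```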
## Converting a Yolo v3 model: TensorFlow -> OpenVINO (IR)
After converting the Darknet Yolo v3 model to a TensorFlow model, you can convert it to an OpenVINO model.
- Install OpenVINO: https://software.intel.com/en-us/openvino-toolkit/choose-download
- Put these files into one directory:
  - `frozen_darknet_yolov3_model.pb` - the TensorFlow model that you got in the previous stage
  - `<OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json`
  - `<OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py`
- Run the command:

```
python3 mo_tf.py -b 1 --input_model ./frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ./yolo_v3.json --data_type FP16
```

- You will get an OpenVINO model that can be run on CPU, GPU, VPU (Myriad X) or FPGA:
  - `frozen_darknet_yolov3_model.xml` - model structure
  - `frozen_darknet_yolov3_model.bin` - model weights
  - `frozen_darknet_yolov3_model.mapping` - mapping file
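You can also verify the generated IR directly from Python before using the demos below. A minimal sketch, assuming the classic `openvino.inference_engine` Python API (OpenVINO 2020-2021 releases); `test.jpg` is a hypothetical input image:

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load the IR and compile it for a device
ie = IECore()
net = ie.read_network(model='frozen_darknet_yolov3_model.xml',
                      weights='frozen_darknet_yolov3_model.bin')
exec_net = ie.load_network(network=net, device_name='CPU')  # or 'MYRIAD' for VPU

input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape  # NCHW, e.g. 1x3x416x416

frame = cv2.imread('test.jpg')  # hypothetical test image
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW

results = exec_net.infer(inputs={input_name: blob})
for name, data in results.items():  # one output blob per YOLO region layer
    print(name, data.shape)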
## Run IR-OpenVINO model on CPU or VPU (Myriad X)
- Build the C++ samples - run:

```
<OPENVINO_INSTALL_DIR>/inference_engine/samples/build_samples.sh
```

- Go to the build directory:

```
cd ~/inference_engine_samples_build/intel64/Release
```

- On CPU:

```
./object_detection_demo_yolov3_async -i ./test.mp4 -m ./frozen_darknet_yolov3_model.xml -d CPU
```

- On VPU:

```
./object_detection_demo_yolov3_async -i ./test.mp4 -m ./frozen_darknet_yolov3_model.xml -d MYRIAD
```
- By using Python code:
  - Go to `<OPENVINO_INSTALL_DIR>/inference_engine/samples/python_samples`
  - Run:

```
python3 object_detection_demo_yolov3_async.py -i test.mp4 -m ./frozen_darknet_yolov3_model.xml -d CPU
```
Also read: https://github.com/PINTO0309/OpenVINO-YoloV3
## Run Yolo v3 models on OpenCV-dnn with the OpenVINO DL-IE backend
- Install OpenCV by using these bash commands (see https://github.com/opencv/opencv/wiki/Intel's-Deep-Learning-Inference-Engine-backend); a quick way to verify the build is shown after the commands:

```
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D WITH_INF_ENGINE=ON \
    -D ENABLE_CXX11=ON \
    -D BUILD_EXAMPLES=OFF \
    -D WITH_FFMPEG=ON \
    -D WITH_V4L=OFF \
    -D WITH_LIBV4L=ON \
    -D OPENCV_ENABLE_PKG_CONFIG=ON \
    -D BUILD_TESTS=OFF \
    -D BUILD_PERF_TESTS=OFF \
    -D INF_ENGINE_LIB_DIRS="/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64" \
    -D INF_ENGINE_INCLUDE_DIRS="/opt/intel/openvino/deployment_tools/inference_engine/include" \
    -D CMAKE_FIND_ROOT_PATH="/opt/intel/openvino/" \
    ..
make -j8
sudo make install
```
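To confirm that the freshly built OpenCV really has the Inference Engine backend compiled in, a minimal check from Python (this only assumes the `cv2` module installed by `make install` above):

```python
import cv2

# 'Inference Engine' appears in the build info when the DL-IE backend is enabled
print('Inference Engine' in cv2.getBuildInformation())
print(cv2.__version__)
```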
- Compile this sample by using the `make` command: ocv_yolov3.zip

  Or use the original sample: https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.cpp
- Download the `yolov3-tiny.cfg` & `yolov3-tiny.weights` files: https://pjreddie.com/darknet/yolo/
- Run the command:

```
./ocv_yolo --config=yolov3-tiny.cfg --model=yolov3-tiny.weights --input=test.mp4 --width=416 --height=416 --classes=coco.names --scale=0.00392 --rgb --backend=2 --target=3
```
Where:

- `--backend`: 0 (auto), 1 (Halide), 2 (Intel Inference Engine), 3 (OpenCV implementation)
- `--target`: 0 (CPU), 1 (OpenCL), 2 (OpenCL FP16), 3 (VPU)
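The same pipeline can also be driven from Python instead of the compiled sample. A minimal sketch, assuming the OpenCV build from this section; the backend and target constants mirror the `--backend=2 --target=3` flags above:

```python
import cv2

# Load the Darknet model directly; OpenCV-dnn parses .cfg/.weights itself
net = cv2.dnn.readNetFromDarknet('yolov3-tiny.cfg', 'yolov3-tiny.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # --backend=2
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)              # --target=3 (VPU)

cap = cv2.VideoCapture('test.mp4')
ok, frame = cap.read()

# Matches --scale=0.00392 --width=416 --height=416 --rgb from the command above
blob = cv2.dnn.blobFromImage(frame, scalefactor=0.00392, size=(416, 416),
                             swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())
for out in outs:  # each row: cx, cy, w, h, objectness, 80 class scores
    print(out.shape)
```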