Converting Yolo v3 models to TensorFlow and OpenVINO(IR) models

To use a Yolo v3 model with the OpenVINO framework, you need to do two steps:

  1. Convert yolov3.cfg/yolov3.weights to the TensorFlow model frozen_darknet_yolov3_model.pb
  2. Convert frozen_darknet_yolov3_model.pb to the OpenVINO model frozen_darknet_yolov3_model.xml/.bin/.mapping

More: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html

Converting a Yolo v3 model: Darknet -> TensorFlow

sudo apt-get install unzip
unzip file.zip -d tensorflow-yolo-v3
cd tensorflow-yolo-v3

for tiny: python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny
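
For the full (non-tiny) Yolo v3 model, the converter is presumably called in the same way, just without the --tiny flag and with yolov3.weights; this exact invocation is an assumption based on convert_weights_pb.py's usage, not taken from this wiki:

python3 convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3.weights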

  • You will get the TensorFlow model frozen_darknet_yolov3_model.pb

Converting a Yolo v3 model: TensorFlow -> OpenVINO(IR)

After converting the Darknet Yolo v3 model to the TensorFlow model, you can convert it to the OpenVINO model.

  • Install OpenVINO: https://software.intel.com/en-us/openvino-toolkit/choose-download

  • Put these files into one directory:

    • frozen_darknet_yolov3_model.pb - the TensorFlow model you got in the previous stage
    • <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf/yolo_v3.json
    • <OPENVINO_INSTALL_DIR>/deployment_tools/model_optimizer/mo_tf.py
  • Run the command: python3 mo_tf.py -b 1 --input_model ./frozen_darknet_yolov3_model.pb --tensorflow_use_custom_operations_config ./yolo_v3.json --data_type FP16

  • You will get an OpenVINO (IR) model that can run on CPU, GPU, VPU (Myriad X), or FPGA:

    • frozen_darknet_yolov3_model.xml - model structure
    • frozen_darknet_yolov3_model.bin - model weights
    • frozen_darknet_yolov3_model.mapping - mapping file

Run the OpenVINO IR model on CPU or VPU (Myriad X)

  1. Build C++ examples - run: <OPENVINO_INSTALL_DIR>/inference_engine/samples/build_samples.sh

    • cd ~/inference_engine_samples_build/intel64/Release

    • on CPU: ./object_detection_demo_yolov3_async -i ./test.mp4 -m ./frozen_darknet_yolov3_model.xml -d CPU

    • on VPU: ./object_detection_demo_yolov3_async -i ./test.mp4 -m ./frozen_darknet_yolov3_model.xml -d MYRIAD

  2. By using Python code:
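
A minimal sketch of such a Python script (not the original snippet from this wiki), assuming the OpenVINO 2020+ Python API (openvino.inference_engine.IECore), the IR files produced above, and a placeholder image name test.jpg:

from openvino.inference_engine import IECore
import cv2

# Load the IR model produced by mo_tf.py
ie = IECore()
net = ie.read_network(model="frozen_darknet_yolov3_model.xml",
                      weights="frozen_darknet_yolov3_model.bin")
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape

# Use device_name="MYRIAD" for VPU (Myriad X) instead of "CPU"
exec_net = ie.load_network(network=net, device_name="CPU")

frame = cv2.imread("test.jpg")                          # placeholder input image
image = cv2.resize(frame, (w, h))
image = image.transpose((2, 0, 1)).reshape(n, c, h, w)  # HWC -> NCHW

outputs = exec_net.infer(inputs={input_blob: image})
# 'outputs' maps output layer names to raw Yolo region blobs; they still need to be
# decoded (anchors, confidence threshold, NMS) to obtain final boxes, as the
# object_detection_demo_yolov3_async sample does in C++.
for name, blob in outputs.items():
    print(name, blob.shape)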



Run Yolo v3 models on OpenCV-dnn with the OpenVINO Inference Engine (DL-IE) backend:

Build OpenCV from source with the Inference Engine backend enabled (WITH_INF_ENGINE=ON):

cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D WITH_INF_ENGINE=ON \
 -D ENABLE_CXX11=ON \
 -D BUILD_EXAMPLES=OFF \
 -D WITH_FFMPEG=ON \
 -D WITH_V4L=OFF \
 -D WITH_LIBV4L=ON \
 -D OPENCV_ENABLE_PKG_CONFIG=ON \
 -D BUILD_TESTS=OFF \
 -D BUILD_PERF_TESTS=OFF \
 -D INF_ENGINE_LIB_DIRS="/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64" \
 -D INF_ENGINE_INCLUDE_DIRS="/opt/intel/openvino/deployment_tools/inference_engine/include" \
 -D CMAKE_FIND_ROOT_PATH="/opt/intel/openvino/" \
 ..

make -j8
sudo make install

The OpenCV dnn samples (for example, samples/dnn/object_detection.py) select the inference backend and target device with these parameters:

--backend: 0 (auto), 1 (Halide), 2 (Intel Inference Engine), 3 (OpenCV implementation)
--target: 0 (CPU), 1 (OpenCL), 2 (OpenCL FP16), 3 (VPU / Myriad)
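
As an illustration of these values, a minimal Python sketch (assumed file names yolov3.cfg / yolov3.weights / test.jpg) that corresponds to --backend 2 --target 3, i.e. the Inference Engine backend running on a Myriad VPU:

import cv2 as cv

net = cv.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # --backend 2
net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)              # --target 3

frame = cv.imread("test.jpg")
blob = cv.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())
# 'outs' holds raw Yolo detections (center x/y, w/h, objectness, class scores)
# that still need confidence filtering and NMS.
print([o.shape for o in outs])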