
Object Detection (YOLOv2)


YOLOv2 is an object detection network developed by Redmon & Farhadi, which identifies objects in images and draws bounding boxes around them. Here, we have adapted a Keras implementation of YOLOv2. While YOLOv2 can be used for any object detection task, we demonstrate how the notebook can be used on a hand-labelled example dataset of migrating cells, in which cells are classified as elongated, rounded, dividing or spread-out.
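For a concrete picture of what the network's output looks like, the sketch below draws a few decoded detections (class label plus bounding box) on an image. This is illustrative only and not the notebook's own code; the image path, class names and coordinates are made up:

```python
# Illustrative sketch only: draw decoded YOLO-style detections on an image.
# The image path, class names, coordinates and scores are placeholders.
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib import image as mpimg

img = mpimg.imread("example_cell_image.png")  # placeholder image file

# Each detection: (class_name, xmin, ymin, xmax, ymax, confidence)
detections = [
    ("elongated", 34, 50, 120, 180, 0.91),
    ("rounded", 200, 90, 260, 150, 0.85),
]

fig, ax = plt.subplots()
ax.imshow(img, cmap="gray")
for name, xmin, ymin, xmax, ymax, score in detections:
    # Rectangle takes the top-left corner plus width and height
    ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, edgecolor="red", linewidth=1.5))
    ax.text(xmin, ymin - 4, f"{name} {score:.2f}", color="red", fontsize=8)
plt.show()
```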

Important disclaimer

Our YOLOv2 notebook is based on the following paper:

YOLO9000: Better, Faster, Stronger, by Joseph Redmon and Ali Farhadi (CVPR 2017, arXiv:1612.08242)

Please also cite this original paper when using or developing our notebook.

Data required to train YOLOv2

Training an object detection network requires an annotated dataset, i.e. images in which the objects have been identified and labelled by a human. Training on a custom dataset, such as specific cell types, therefore requires hand-annotating the images. The training dataset then consists of the raw images as inputs and, as targets, the corresponding files containing the coordinates and class of every bounding box in a given image. To use this notebook, these target files need to be .xml files in the PASCAL VOC format. To create such a dataset on your own examples, we used a simple web tool, makesense.ai, which lets you upload images (in .jpg or .png format) and label them in a simple GUI.
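As an illustration of the PASCAL VOC format, here is a minimal sketch of how such an .xml target file can be read with Python's standard library. The file name is a placeholder and this is not the notebook's own code; the element names (object, name, bndbox, xmin/ymin/xmax/ymax) are the standard PASCAL VOC fields:

```python
# Minimal sketch: read one PASCAL VOC annotation file and list its objects.
# "cell_image_01.xml" is a placeholder file name.
import xml.etree.ElementTree as ET

tree = ET.parse("cell_image_01.xml")
root = tree.getroot()

for obj in root.findall("object"):
    class_name = obj.find("name").text        # e.g. "elongated", "rounded"
    box = obj.find("bndbox")
    xmin = int(float(box.find("xmin").text))
    ymin = int(float(box.find("ymin").text))
    xmax = int(float(box.find("xmax").text))
    ymax = int(float(box.find("ymax").text))
    print(class_name, xmin, ymin, xmax, ymax)
```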

To replicate this on your own dataset, follow these steps:

  1. Go to makesense.ai and click on 'Get Started'.
  2. Upload your images - click on the main box to browse your files (these need to be .png or .jpg, not .tif).
  3. When your images are uploaded, select 'Object Detection'.
  4. Create a labels list with the names of the classes you want to identify in your dataset. You can add classes by clicking on the '+' in the top left corner. (If you forget something, you can always add more labels later by clicking on 'Update Label Names' at the top and then on '+' in the dialogue box.)
  5. When you have finished your labels list, select 'Going on my own' and leave the boxes unchecked.
  6. Now you can start labelling your images. Draw bounding boxes with the cursor, then select the label name on the right-hand side by clicking on 'Select Label' and choosing from the dropdown list.
  7. When all images are labelled to satisfaction, click on 'Export Labels' in the top right and select 'A .zip package containing files in VOC XML format'. Leave the other boxes unchecked, then click 'Export'.
  8. The label files will have the name of the original image file with an .xml suffix.
  9. Put your source images and target annotations in separate folders, upload them to your Google Drive and you are ready to start with the training (a quick consistency check is sketched below).
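Before uploading, it can help to verify that every image has a matching annotation file. A minimal sketch, assuming the images and exported .xml files sit in two local folders (the folder names below are placeholders):

```python
# Quick consistency check (illustrative only): every image should have a
# matching PASCAL VOC .xml file with the same base name. Folder names are
# placeholders for your own source and target folders.
import os

image_dir = "training/images"            # placeholder: raw .png/.jpg images
annotation_dir = "training/annotations"  # placeholder: exported VOC .xml files

images = {os.path.splitext(f)[0] for f in os.listdir(image_dir)
          if f.lower().endswith((".png", ".jpg", ".jpeg"))}
annotations = {os.path.splitext(f)[0] for f in os.listdir(annotation_dir)
               if f.lower().endswith(".xml")}

missing = sorted(images - annotations)
if missing:
    print("Images without annotations:", missing)
else:
    print(f"All {len(images)} images have a matching annotation file.")
```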

Training YOLOv2 in Google Colab

| Network | Link to example training and test dataset | Direct link to notebook in Colab |
| --- | --- | --- |
| YOLOv2 | here | Open In Colab |

or:

To train YOLOv2 in Google Colab: