Using the ArcCi Training GUI - CjMoor3/ArcCI-Collab-Repo GitHub Wiki

Overview

The Arc-CI Deep Learning Training GUI provides the tools needed to manually create training data that is both consistently suitable for deep learning and easy to produce in large quantities. The GUI can re-size, re-scale, re-segment, and crop images to produce an image library with uniform clarity and an ideal training-sample size. Images of 256x256 pixels are well suited to convolutional neural networks (Rukundo 2021), and the GUI provides an “Auto Button” that instantly sets the re-sizing parameters needed to create a 256x256 image.
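
The resizing the Auto Button performs can be approximated outside the GUI as well. A minimal sketch using Pillow follows; the function name and the choice of resampling filter are assumptions for illustration, not the GUI's confirmed internals:

```python
from PIL import Image

def resize_to_256(src_path, out_path):
    """Resize an image to the 256x256 size favored for CNN training.

    LANCZOS resampling is an assumption here; the GUI's actual
    resampling method is not documented on this page.
    """
    img = Image.open(src_path)
    img = img.resize((256, 256), Image.LANCZOS)
    img.save(out_path)
```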

The GUI has two viewing windows. The one on the left, the “Image Window”, displays the uploaded image; the one on the right, the “Classification Window”, displays automatically drawn image segments, which can be selected and classified. The square on the image window is the “Zoom Window”: it marks the area shown in the classification window and defines the boundaries used to crop the image when the “Crop Image” button is pressed. Pressing the “Auto Button” adjacent to the zoom sliders fixes this square at exactly 256x256 pixels, indicated by the zoom window turning from red to blue. Before cropping, always check the classification window to confirm that the prospective cropped image shows clear, distinguishable features that can be evaluated and classified manually.

TODO: ADD IMAGES OF TRAINING GUI

See our tutorial on YouTube

Installation and Tutorial

Training Data Collection Process

Although image cropping and classification can be done in any order and still yield quality results, it is recommended that data collection be split into two distinct phases. The first, the “Image Cropping Phase”, involves cropping as many high-quality 256x256 image samples as possible from a folder of larger images. The second, the “Classification Phase”, involves taking a folder of 256x256 images produced in the cropping phase, re-drawing image segments so they are as distinct as possible to create segmentation masks, and then classifying those segments with the classification buttons. Treating these tasks as independent lets one team member crop images while another classifies segments on already-cropped images, saving a great deal of time and allowing many high-quality samples to be produced.

A shared storage location is recommended to speed up file transfer between users. A network drive or cloud storage is ideal for data-transfer efficiency. External USB drives passed between users may work, but they pose a data-integrity risk through loss, damage, or corruption. Sending data through email or any communication platform that compresses files or imposes file-size limits is discouraged.

Image Cropping Phase

The goal of this phase is to build an image folder containing large quantities of 256x256-pixel satellite images. To perform it, fill the training GUI's image read folder with satellite images larger than 256x256, then crop 256x256 images from those sources: use the red zoom window to find a desirable portion of the larger image, set the zoom size to auto (turning the zoom window blue), and press “Crop Image” for each image collected. Pressing “Crop Image” creates a new image from the zoom window contents and writes it to the training GUI write folder. Take care not to press the button multiple times, which would produce duplicate images.
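
The crop step can be mimicked in code. A minimal sketch using Pillow, where `(left, top)` stands in for the zoom window's position; the function name and paths are illustrative, not part of the GUI:

```python
from PIL import Image

def crop_256(src_path, out_path, left, top):
    """Crop a 256x256 region starting at (left, top), mirroring the
    blue (auto-sized) zoom window, and save it to the write folder."""
    img = Image.open(src_path)
    if left + 256 > img.width or top + 256 > img.height:
        raise ValueError("256x256 window does not fit inside the source image")
    img.crop((left, top, left + 256, top + 256)).save(out_path)
```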

TODO: ADD VIDEO

Segment Classification Phase

Whereas the image cropping phase is simple, the segment classification phase is considerably more involved. It is recommended that users reference the video tutorial for this section, at least until they are comfortable with the workflow.

If you have completed the image cropping phase and intend to classify the images you have created:

  1. Close the training GUI if it is still open.
  2. Move the 256x256 images that were created in the image cropping phase from the GUI write folder into the GUI read folder and initialize those images.
  3. Change segmentation parameters (Gauss Sigma, Feature Separation) to accurately segment image features.
  4. Select segments and use classification buttons to either annotate the segment with a classification or leave it blank.
  5. Save data, passing all user classifications to the dataset.
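
The GUI's segmentation internals are not documented on this page, but the parameter names in step 3 suggest Gaussian pre-smoothing followed by a graph-based segmentation. A rough sketch using scikit-image's `felzenszwalb` is shown below; the algorithm choice and the mapping of “Feature Separation” onto the `scale` parameter are assumptions, not the GUI's confirmed implementation:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment(image, gauss_sigma=1.0, feature_separation=100):
    """Label image pixels into segments.

    `gauss_sigma` smooths the image before segmentation; the assumed
    analogue of "Feature Separation" is `scale`, which controls how
    readily neighboring regions merge (larger -> fewer, bigger segments).
    """
    return felzenszwalb(image, scale=feature_separation, sigma=gauss_sigma)
```

Increasing `gauss_sigma` suppresses fine texture so segments follow large features; increasing `feature_separation` merges adjacent regions into fewer segments.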

If you are classifying images cropped by another user:

  1. Ensure images sent by the other user are 256x256 pixels. If they are, continue; if not, perform the image cropping phase (described above) on these images before moving to the next step.
  2. Move images sent by the other user into the GUI read folder, and initialize those images.
  3. Change segmentation parameters (Gauss Sigma, Feature Separation) to accurately segment image features.
  4. Select segments and use classification buttons to either annotate the segment with a classification or leave it blank.
  5. Save data, passing all user classifications to the dataset.

At this point, the data will have been transformed from raw satellite images of varying sizes into 256x256 images, and then into classification masks saved as COCO files with the .json extension. The segment classification phase ends when all available 256x256 images have been classified; at that point, create or obtain more images of that size via the image cropping phase. Note that the image read queue is circular: the last index links back to the first, so images cycle indefinitely and are never removed from or reordered within the queue. Users must therefore keep track of which image they are classifying and check the COCO data to ensure that no image is classified twice.
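
Because the read queue cycles endlessly, inspecting the saved COCO file is the practical guard against double classification. A minimal sketch of such a check, assuming the standard COCO layout with `images` and `annotations` arrays (the function name and file path are illustrative):

```python
import json

def already_classified(coco_path, file_name):
    """Return True if `file_name` already has annotations in the COCO file."""
    with open(coco_path) as f:
        coco = json.load(f)
    # Map the image's file name to its numeric id(s), then look for
    # any annotation that references one of those ids.
    ids = {img["id"] for img in coco.get("images", [])
           if img["file_name"] == file_name}
    return any(ann["image_id"] in ids for ann in coco.get("annotations", []))
```

Running this against the target dataset before opening an image tells the user whether it can be skipped.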

NOTE: Data must be saved before it is represented in the training GUI's target dataset. Pressing "Save as" bypasses the current target dataset and prompts the user to name and create a new one.