Model Training
Tagging images
We use the labelme software to annotate the capillary images.
The main reason is that labelme can annotate images using circles, which is very handy when it comes to labelling capillary apexes.
After installing labelme, load all the capillary images you would like to annotate and use the circle annotation tool to label all the capillary apexes in all of the images.
Make sure all the selected apexes have the same label e.g. "CAP".
When done labelling an image, save your progress. Labelme will create a .json file alongside each image file.
Use the images and the .json files as described in the next step.
An example of an annotated image:
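To sanity-check your annotations, you can inspect a saved .json file from Python. A minimal sketch, assuming labelme's standard circle format (two points per shape: the centre and one point on the rim); the file name is hypothetical:

```python
import json

# Load a labelme annotation file saved next to its image (file name is hypothetical).
with open("capillary_001.json") as f:
    ann = json.load(f)

# labelme stores a circle as two points: the centre and one point on the rim.
for shape in ann["shapes"]:
    if shape["shape_type"] == "circle" and shape["label"] == "CAP":
        (cx, cy), (rx, ry) = shape["points"]
        radius = ((rx - cx) ** 2 + (ry - cy) ** 2) ** 0.5
        print(f"apex at ({cx:.0f}, {cy:.0f}), radius {radius:.1f}px")
```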
Cropping and Exporting tagged images
Before importing to Roboflow, convert the .json files to .txt files using label_converter.py.
This process takes the circles from labelme and converts them to squares, padding each square by 10px in each direction. The padding improves object detection.
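Conceptually, the conversion works roughly like the sketch below. This is not the actual label_converter.py code: the helper names are made up, and the YOLO-style normalised output format is an assumption based on the Darknet import in the next step.

```python
# A hedged sketch of the circle-to-box conversion described above.

PAD = 10  # padding in pixels added on each side, as described above

def circle_to_padded_square(cx, cy, radius, img_w, img_h):
    """Turn a labelme circle into a square box padded by PAD px per side."""
    half = radius + PAD
    x1, y1 = max(cx - half, 0), max(cy - half, 0)
    x2, y2 = min(cx + half, img_w), min(cy + half, img_h)
    return x1, y1, x2, y2

def to_yolo_line(class_id, box, img_w, img_h):
    """Format a box as a YOLO .txt line: class x_center y_center width height (normalised)."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {(x2 - x1) / img_w:.6f} {(y2 - y1) / img_h:.6f}"

# Example: a circle at (120, 80) with radius 15 in a 640x480 image.
print(to_yolo_line(0, circle_to_padded_square(120, 80, 15, 640, 480), 640, 480))
```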
Roboflow YOLOv4
Roboflow
Open Roboflow and create a new dataset with "Object Detection (Bounding Box)" as the Dataset Type.
Upload the images and .txt files to Roboflow; the images should be previewed with bounding boxes drawn over the apexes.
Generate a version of the dataset without resize (Auto-Orient is fine).
Under "Export your dataset" select the "YOLO Darknet" format and click "Get Link".
Copy the Jupyter snippet.
Colab
Open the training Colab and replace the curl command under "Set up Custom Dataset for YOLOv4" with the one you copied previously.
Run the first two commands. If you were not allocated a Tesla T4 GPU, refresh the page and try again; training on a T4 is substantially quicker.
Run the rest of the commands.
Download the cfg/custom-yolov4-tiny-detector.cfg and backup/backup/custom-yolov4-tiny-detector_best.weights files from the darknet folder in the Colab sidebar.
Convert to TensorRT
At the end of the previous step, you should have 2 files: .cfg and .weights
You can also download the latest version from here
You will need to convert them to ONNX and then to TRT format on the machine that will be running the detection software.
The instructions below are for the NVIDIA Jetson Nano and Xavier NX platforms.

- Install the dependencies required for YOLOv4 (based on this post):
  - Set the proper environment variables:

        mkdir ~/Projects
        cd ~/Projects
        git clone https://github.com/jkjung-avt/jetson_nano.git
        cd jetson_nano
        ./install_basics.sh
        source ${HOME}/.bashrc
  - Install the dependencies for Python 3 OpenCV:

        sudo apt-get install -y build-essential make cmake cmake-curses-gui git g++ pkg-config curl libfreetype6-dev libcanberra-gtk-module libcanberra-gtk3-module
        sudo apt-get install -y python3-dev python3-testresources python3-pip
        sudo pip3 install -U pip Cython
  - Install protobuf; this script takes quite a while to execute (~1h):

        sudo apt-get install automake
        cd ~/Projects/jetson_nano
        ./install_protobuf-3.8.0.sh
  - Install numpy and matplotlib (ignore the NumPy version error, as NumPy will be upgraded in the next step):

        sudo pip3 install numpy matplotlib
  - Install TensorFlow:

        sudo apt-get install -y libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
        sudo pip3 install -U pip testresources setuptools
        sudo pip3 install -U numpy==1.16.1 future mock h5py==2.10.0 keras_preprocessing keras_applications gast==0.2.2 futures pybind11
        sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow==1.15.2
- Make sure the G-Scope is connected and working (follow steps 7-10 here).
- Compile and test the custom model:
  - Install the dependencies and download the conversion and test scripts:

        sudo pip3 install onnx==1.4.1
        cd ~/Projects
        git clone https://github.com/dlx-designlab/tensorrt_demos.git
        cd tensorrt_demos/ssd
        ./install_pycuda.sh
        cd ~/Projects/tensorrt_demos/plugins
        make
  - Copy the .cfg and .weights files of the YOLO model to ~/Projects/tensorrt_demos/yolo. IMPORTANT: the two files should have the same name, just a different file extension/format.
  - Convert the files into .trt format (should take around 30 mins; a quick ONNX sanity check is sketched after this list):

        cd ~/Projects/tensorrt_demos/yolo
        python3 yolo_to_onnx.py -m [model-file-name-without-extension] -c 1
        python3 onnx_to_tensorrt.py -m [model-file-name-without-extension] -c 1
- Test the model (remember to have the G-Scope connected). Note: you might need to update the model file name in the trt_yolo.py script itself.

      cd ~/Projects/tensorrt_demos/
      python3 trt_yolo.py --usb 0 -m [model-file-name-without-extension] -c 1
Put your finger under the microscope and check if capillaries are being detected.
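If onnx_to_tensorrt.py fails, it can help to first confirm that the intermediate ONNX file is well formed. A minimal check using the onnx package installed earlier; the file name is a placeholder for your model's name:

```python
import onnx

# Load the intermediate ONNX file produced by yolo_to_onnx.py
# ("custom-yolov4-tiny-detector.onnx" is a placeholder name).
model = onnx.load("custom-yolov4-tiny-detector.onnx")

# Raises an exception if the model structure is invalid.
onnx.checker.check_model(model)
print("ONNX model is well formed")
```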
Roboflow YOLOv8 (on Raspberry Pi 5)
Roboflow
Open Roboflow and create a new dataset with "Object Detection (Bounding Box)" as the Dataset Type.
Upload the images and .txt files to Roboflow; the images should be previewed with bounding boxes drawn over the apexes.
Generate a version of the dataset without resize (Auto-Orient is fine).
Under "Export your dataset" select the "YOLOv8" format, check the show download code box.
Copy the Jupyter snippet.
Colab
Open the training Colab and replace the code block under "Step 5: Exporting dataset" with the one you copied previously, then run the code. Before running the training command:

- change the model parameter to yolov8n.pt;
- change the data parameter to the correct location (warning: you might also need to change some locations in the data.yaml file);
- change imgsz to 224 and choose a number of epochs.

Download the apex-detection-8/runs/detect/train/weights/best.pt file from the content folder in the Colab sidebar.
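For reference, the training step corresponds roughly to this ultralytics Python API call. This is a sketch: the data.yaml path is an assumption based on the folder name above, and the epoch count is only an example.

```python
from ultralytics import YOLO

# Start from the pretrained YOLOv8 nano checkpoint.
model = YOLO("yolov8n.pt")

# Train on the exported Roboflow dataset; the data.yaml path is an
# assumption based on the apex-detection-8 folder mentioned above.
model.train(
    data="apex-detection-8/data.yaml",
    imgsz=224,
    epochs=100,  # arbitrary example value; pick what works for your dataset
)
```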
Convert to OpenVINO
At the end of the previous step, you should have a best.pt file.
You can also download the latest version from here
You will need to convert it to OpenVINO format on the machine that will be running the detection software.
The instructions below are for the Raspberry Pi 5.

- Install the requirements for this project by running the following command on the RPi 5:

      pip install -r requirements.txt

  Note: If you encounter errors when using the pip command, use a Python virtual environment by following this link, or run this command:

      sudo mv /usr/lib/python3.11/EXTERNALLY-MANAGED /usr/lib/python3.11/EXTERNALLY-MANAGED.old
- Make sure the G-Scope is connected and working (follow steps 8-15 here, or, if it is only for testing, simply plug it in).
- Compile and test the custom model:
  - Download the test scripts:

        git clone [email protected]:dlx-designlab/Attune.git
        cd Attune
        git switch scope_pi_app
        cd tests
  - Copy the .pt file of the YOLO model to the /tests folder.
  - Run the test script. The conversion to OpenVINO is done when the script executes; afterwards you will find a folder named best_openvino_model (a sketch of what this does is given after this list):

        python3 testModel.py
- Test the model (remember to have the G-Scope connected).
  Note: You might need to update the model file name in the testModel.py script itself.
  Note: If you get the error ModuleNotFoundError: No module named 'ultralytics.nn.modules.conv'; 'ultralytics.nn.modules' is not a package, run pip install --upgrade ultralytics.
Put your finger under the microscope and check if capillaries are being detected.
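For reference, the export-and-test flow that testModel.py performs can be approximated with the ultralytics API as below. This is a sketch, not the actual script: the imgsz value mirrors the training setting above, and the assumption that the G-Scope enumerates as USB camera 0 may not hold on your setup.

```python
from ultralytics import YOLO

# Export the trained PyTorch weights to OpenVINO format;
# this creates the best_openvino_model folder mentioned above.
model = YOLO("best.pt")
model.export(format="openvino", imgsz=224)

# Reload the exported model and run live detection on the G-Scope
# (assumed to enumerate as USB camera 0).
ov_model = YOLO("best_openvino_model/")
for result in ov_model.predict(source=0, show=True, stream=True):
    print(f"{len(result.boxes)} capillary apexes detected in frame")
```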