3 ‐ Export - Af-Oliveira/WasteVision GitHub Wiki

Model Export with exporter.py

The exporter.py script allows you to export your trained YOLOv8 models for deployment on different platforms and hardware. You can access this feature by selecting the Export option in your virtual environment's main menu.

1. How It Works

  • Model Selection:
    When you choose the Export option, a graphical form will appear. You can select the latest model you have trained (for example, best.pt in your models directory).

  • Export Format:
    Currently, the only export format available is ncnn. This format is optimized for efficient inference on devices with limited resources, such as Raspberry Pi or other edge devices.
    The script is designed to be extensible, so you are free to modify the SUPPORTED_FORMATS list and add more export options as needed.

  • Image Size:
    You will be prompted to specify the image size (imgsz) that matches the size used during training. This ensures the exported model is compatible with your data pipeline.

  • Simplify Option:
    When enabled, the simplify option simplifies the model graph during export. This can reduce the model's complexity, improve inference speed, and make deployment easier on low-power hardware. In most cases, enabling it is recommended for edge deployment.

  • Export Process:
    After you fill in the form and confirm, the system will export the model in the selected format and save the exported files in the same directory as your original model.
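The form's answers ultimately feed a single Ultralytics export call. The sketch below shows how the validated inputs could be collected into the export arguments; the helper `build_export_args`, the `models/best.pt` path, and the multiple-of-32 check on imgsz (YOLOv8's default stride) are illustrative assumptions, not necessarily the exact code in exporter.py:

```python
SUPPORTED_FORMATS = ["ncnn"]  # extend this list to add more export options


def build_export_args(fmt: str, imgsz: int, simplify: bool) -> dict:
    """Validate the form's answers and build kwargs for the export call (sketch)."""
    if fmt not in SUPPORTED_FORMATS:
        raise ValueError(f"Unsupported format: {fmt!r}; choose from {SUPPORTED_FORMATS}")
    if imgsz <= 0 or imgsz % 32 != 0:
        # YOLOv8 uses a stride of 32, so image sizes are typically multiples of 32.
        raise ValueError("imgsz must be a positive multiple of 32")
    return {"format": fmt, "imgsz": imgsz, "simplify": simplify}


# The export itself then reduces to one Ultralytics call, e.g.:
# from ultralytics import YOLO
# YOLO("models/best.pt").export(**build_export_args("ncnn", 640, True))
```

The exported files land next to the original checkpoint, so no output path needs to be passed here.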

2. Performance Benefits

Exporting your model to the ncnn format can significantly improve inference performance and reduce resource usage, especially when running on low-power hardware like a Raspberry Pi.

3. Automatic Dependency Handling

To use the export feature, the system requires the ncnn Python package.
You do not need to install it manually: the script automatically detects whether ncnn is missing and installs it for you before proceeding with the export.
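The check-then-install step can be sketched with the standard library; the helper name `ensure_package` is an assumption for illustration, not necessarily what exporter.py calls it:

```python
import importlib.util
import subprocess
import sys


def ensure_package(name, pip_name=None):
    """Install a package via pip only if it cannot already be imported (sketch)."""
    if importlib.util.find_spec(name) is None:
        # Use the current interpreter's pip so the install lands in the active venv.
        subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name or name])


# Before exporting, the script can guarantee ncnn is available:
# ensure_package("ncnn")
```

Installing through `sys.executable -m pip` keeps the dependency inside the same virtual environment the menu is running in.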

4. Notes

  • By using the export feature, you ensure your models are ready for efficient deployment on a wide range of devices, including those with limited computational resources.
  • You are free to extend the script to support additional export formats as your project evolves.
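One way to extend the script is to pair each format name with the extra export arguments it needs. The table below is purely hypothetical (the `EXPORT_PRESETS` name and the `opset`/`int8` entries are illustrative, not part of exporter.py):

```python
# Hypothetical extension point: per-format extra export kwargs.
EXPORT_PRESETS = {
    "ncnn": {},                # current default, no extra arguments
    "onnx": {"opset": 12},     # example addition
    "tflite": {"int8": False}, # example addition
}


def export_kwargs(fmt: str, imgsz: int) -> dict:
    """Merge the shared arguments with any format-specific extras (sketch)."""
    return {"format": fmt, "imgsz": imgsz, **EXPORT_PRESETS[fmt]}
```

With this layout, adding a new format is a one-line change to `EXPORT_PRESETS` rather than a new branch in the export logic.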