YOLOv4 model zoo - Sudhakar17/darknet GitHub Wiki

# YOLOv4 model zoo

We provide a collection of COCO pre-trained weight files for the YOLOv4 series of models.

## COCO pre-trained files (512x512)

These models were trained with anchors computed for a 512x512 input resolution, and are evaluated below at both 512x512 and 608x608 input sizes.

| Model | Size | AP | AP50 | AP75 | APS | APM | APL | cfg | weights | score |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv4 | 512 | 43.0 | 64.9 | 46.5 | 24.3 | 46.1 | 55.2 | cfg | weights | score |
| YOLOv4-Leaky | 512 | 42.4 | 64.5 | 46.0 | 23.9 | 45.6 | 54.2 | cfg | weights | score |
| YOLOv4-SAM-Leaky | 512 | 42.7 | 64.5 | 46.4 | 24.1 | 45.3 | 54.8 | cfg | weights | score |
| YOLOv4-Mish | 512 | 43.2 | 64.7 | 46.9 | 24.4 | 46.1 | 55.4 | cfg | weights | score |
| YOLOv4-SAM-Mish | 512 | 43.4 | 65.0 | 47.2 | 24.6 | 46.7 | 55.5 | cfg | weights | score |

| Model | Size | AP | AP50 | AP75 | APS | APM | APL | cfg | weights | score |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv4 | 608 | 43.5 | 65.7 | 47.3 | 26.7 | 46.7 | 53.3 | cfg | weights | score |
| YOLOv4-Leaky | 608 | 42.9 | 65.3 | 46.8 | 26.1 | 46.3 | 52.3 | cfg | weights | score |
| YOLOv4-SAM-Leaky | 608 | 43.3 | 65.4 | 47.1 | 26.2 | 46.1 | 53.2 | cfg | weights | score |
| YOLOv4-Mish | 608 | 43.8 | 65.6 | 47.8 | 26.7 | 46.8 | 53.6 | cfg | weights | score |
| YOLOv4-SAM-Mish | 608 | 44.0 | 65.7 | 48.0 | 26.7 | 47.2 | 54.1 | cfg | weights | score |

## COCO pre-trained files (416x416)

These models were trained with anchors computed for a 416x416 input resolution.

| Model | Size | AP | AP50 | AP75 | APS | APM | APL | cfg | weights | score |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv4-Leaky-416 | 416 | 40.7 | 62.7 | 43.9 | 21.4 | 43.7 | 54.0 | cfg | weights | score |
| YOLOv4-Mish-416 | 416 | 41.5 | 63.3 | 44.7 | 21.9 | 44.4 | 55.3 | cfg | weights | score |

## Selecting a model suitable for training on your GPU

Here we list the minimum GPU RAM required to train each model. In darknet, each batch of `batch` images is processed `batch/subdivisions` images at a time, so larger `subdivisions` values reduce GPU memory use at the cost of training speed. For this reason, we do not recommend training with `subdivisions` of 32 or larger, as training becomes very slow. We are developing a training method that needs less GPU RAM.

- For 4 GB GPUs:
  - YOLOv4-Leaky-416: `batch=64, subdivisions=64`
- For 6 GB GPUs:
  - YOLOv4-Leaky-416: `batch=64, subdivisions=32`
  - YOLOv4-Mish-416: `batch=64, subdivisions=64`
- For 8 GB GPUs:
  - YOLOv4-Mish-416: `batch=64, subdivisions=32`
- For 11 GB GPUs (1080 Ti, 2080 Ti):
  - YOLOv4-Leaky-416: `batch=64, subdivisions=16`
  - YOLOv4-Mish-416: `batch=64, subdivisions=16`
- For 12 GB GPUs (Titan X, Titan Xp, Titan V):
  - YOLOv4-Leaky: `batch=64, subdivisions=16`
  - YOLOv4-SAM-Leaky: `batch=64, subdivisions=16`
- For 16 GB GPUs (P100):
  - YOLOv4: `batch=64, subdivisions=16`
  - YOLOv4-Mish: `batch=64, subdivisions=16`
  - YOLOv4-SAM-Mish: `batch=64, subdivisions=16`
- For 24 GB GPUs (Titan RTX, RTX 6000):
  - YOLOv4-Leaky: `batch=64, subdivisions=8`
  - YOLOv4-SAM-Leaky: `batch=64, subdivisions=8`
- For 32 GB GPUs (V100, V100S):
  - YOLOv4: `batch=64, subdivisions=8`
  - YOLOv4-Mish: `batch=64, subdivisions=8`
  - YOLOv4-SAM-Mish: `batch=64, subdivisions=8`

## Model descriptions

- **YOLOv4**
  - The model described in the YOLOv4 paper.
  - Backbone: CSPDarknet53 with Mish activation
  - Neck: PANet with Leaky activation
  - Plugin modules: SPP
  - V100 FPS: 62 @ 608x608, 83 @ 512x512
  - BFLOPs: 128.5 @ 608x608
- **YOLOv4-Leaky**
  - Backbone: CSPDarknet53 with Leaky activation
  - Neck: PANet with Leaky activation
  - Plugin modules: SPP
  - BFLOPs: 128.5 @ 608x608
- **YOLOv4-SAM-Leaky**
  - Backbone: CSPDarknet53 with Leaky activation
  - Neck: PANet with Leaky activation
  - Plugin modules: SPP, SAM
  - BFLOPs: 130.7 @ 608x608
- **YOLOv4-Mish**
  - Backbone: CSPDarknet53 with Mish activation
  - Neck: PANet with Mish activation
  - Plugin modules: SPP
  - BFLOPs: 128.5 @ 608x608
- **YOLOv4-SAM-Mish**
  - Backbone: CSPDarknet53 with Mish activation
  - Neck: PANet with Mish activation
  - Plugin modules: SPP, SAM
  - V100 FPS: 61 @ 608x608, 81 @ 512x512
  - BFLOPs: 130.7 @ 608x608
- **YOLOv4-CSP**
  - Details will be updated.
  - Backbone: CSPDarknet53 with Mish activation
  - Neck: CSPPANet with Mish activation
  - Plugin modules: SPP
- **YOLOv4-CSP-SAM**
  - Details will be updated.
  - Backbone: CSPDarknet53 with Mish activation
  - Neck: CSPPANet with Mish activation
  - Plugin modules: SPP, SAM
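The Leaky and Mish variants above differ only in the nonlinearity applied after convolutions. As an illustrative sketch (not code from this repository; the 0.1 negative slope is darknet's standard Leaky ReLU setting), the two activations are:

```python
import math

def leaky(x: float, slope: float = 0.1) -> float:
    """Leaky ReLU: identity for positive inputs, small linear slope for negative."""
    return x if x > 0 else slope * x

def mish(x: float) -> float:
    """Mish: x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x)."""
    return x * math.tanh(math.log1p(math.exp(x)))
```

Mish is smooth and non-monotonic, which tends to improve accuracy slightly (compare the Mish vs. Leaky rows in the tables above) at a small extra compute cost.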