# ocsort_yolox_x_crowdhuman_mot17-private-half
This model is from OpenMMLab's MMTracking library. Multi-Object Tracking (MOT) has rapidly progressed with the development of object detection and re-identification. However, motion modeling, which facilitates object association by forecasting short-term trajectories from past observations, has been relatively under-explored in recent years. Current motion models in MOT typically assume that object motion is linear within a small time window and require continuous observations, so these methods are sensitive to occlusions and non-linear motion and depend on high-frame-rate videos. In this work, we show that a simple motion model can obtain state-of-the-art tracking performance without other cues such as appearance. We emphasize the role of "observation" when recovering lost tracks and reducing the error accumulated by linear motion models during the lost period. We thus name the proposed method Observation-Centric SORT, or OC-SORT for short. It remains simple, online, and real-time, but improves robustness to occlusion and non-linear motion. It achieves 63.2 and 62.1 HOTA on MOT17 and MOT20, respectively, surpassing all published methods. It also sets a new state of the art on KITTI Pedestrian Tracking and on DanceTrack, where object motion is highly non-linear.
The model developers used the CrowdHuman + MOT17-half-train dataset to train the model.
Training Techniques:
- SGD with Momentum
Training Resources: 8x V100 GPUs
MOTA: 77.8
IDF1: 78.4
License: apache-2.0
### Inference samples

| Inference type | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- |
| Real time | video-multi-object-tracking-online-endpoint.ipynb | video-multi-object-tracking-online-endpoint.sh |
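The notebook and script above are the authoritative samples. For orientation only, a minimal sketch of deploying this model from the azureml registry to a managed online endpoint with the `azure-ai-ml` SDK might look like the following; the endpoint name, deployment name, and instance type below are illustrative choices, not values taken from the samples.

```python
# Minimal sketch (NOT the official sample): deploy this registry model to a
# managed online endpoint. Placeholders in angle brackets must be filled in.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",      # placeholder
    resource_group_name="<RESOURCE_GROUP>",   # placeholder
    workspace_name="<WORKSPACE_NAME>",        # placeholder
)

# Fully qualified asset ID of this model in the azureml registry (version 6).
model_id = (
    "azureml://registries/azureml/models/"
    "ocsort_yolox_x_crowdhuman_mot17-private-half/versions/6"
)

# Illustrative endpoint name; any valid name works.
endpoint = ManagedOnlineEndpoint(name="mot-ocsort-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="default",                    # illustrative deployment name
    endpoint_name=endpoint.name,
    model=model_id,
    instance_type="Standard_NC6s_v3",  # pick any SKU from the inference allow list
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```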
### Finetuning samples

| Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- | --- | --- |
| Video multi-object tracking | Video multi-object tracking | MOT17 tiny | mot17-tiny-video-multi-object-tracking.ipynb | mot17-tiny-video-multi-object-tracking.sh |
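The finetuning notebook wires this model into a pipeline job built from registry assets. As a rough sketch only: the snippet below fetches the model and a finetuning component from the azureml registry; the component name shown is hypothetical, so take the real name from the referenced notebook.

```python
# Minimal sketch (assumptions flagged inline), not the official sample:
# fetch this model and a finetuning component from the azureml registry.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

registry_client = MLClient(DefaultAzureCredential(), registry_name="azureml")

# The model asset described by this page (version 6).
model = registry_client.models.get(
    name="ocsort_yolox_x_crowdhuman_mot17-private-half", version="6"
)

# HYPOTHETICAL component name -- the real name is in the referenced notebook.
finetune_component = registry_client.components.get(
    name="video_multi_object_tracking_finetune", label="latest"
)
print(model.id, finetune_component.id)
```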
### Sample input

```json
{
  "input_data": {
    "columns": [
      "video"
    ],
    "data": ["video_link"]
  }
}
```

Note: "video_link" should be a publicly accessible URL.
### Sample output

```json
[
  {
    "det_bboxes": [
      {
        "box": {
          "topX": 703.9149780273,
          "topY": -5.5951070786,
          "bottomX": 756.9875488281,
          "bottomY": 158.1963806152
        },
        "label": 0,
        "score": 0.9597821236
      },
      {
        "box": {
          "topX": 1487.9072265625,
          "topY": 67.9468841553,
          "bottomX": 1541.1591796875,
          "bottomY": 217.5476837158
        },
        "label": 0,
        "score": 0.9568068385
      }
    ],
    "track_bboxes": [
      {
        "box": {
          "instance_id": 0,
          "topX": 703.9149780273,
          "topY": -5.5951070786,
          "bottomX": 756.9875488281,
          "bottomY": 158.1963806152
        },
        "label": 0,
        "score": 0.9597821236
      },
      {
        "box": {
          "instance_id": 1,
          "topX": 1487.9072265625,
          "topY": 67.9468841553,
          "bottomX": 1541.1591796875,
          "bottomY": 217.5476837158
        },
        "label": 0,
        "score": 0.9568068385
      }
    ],
    "frame_id": 0,
    "video_url": "video_link"
  }
]
```
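A response of this shape is plain JSON: one entry per frame, each with per-frame detections (`det_bboxes`) and tracked boxes carrying a persistent `instance_id` (`track_bboxes`). A small illustrative sketch of consuming it, with field names taken from the sample output above:

```python
# Minimal sketch: group tracked boxes by instance_id across frames.
# `response` is the JSON string returned by the endpoint invocation above.
import json
from collections import defaultdict

frames = json.loads(response)

tracks = defaultdict(list)  # instance_id -> list of (frame_id, box)
for frame in frames:
    for det in frame["track_bboxes"]:
        box = det["box"]
        tracks[box["instance_id"]].append((frame["frame_id"], box))

for instance_id, boxes in tracks.items():
    print(f"track {instance_id}: seen in {len(boxes)} frame(s)")
```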
Version: 6
SharedComputeCapacityEnabled
license : apache-2.0
model_specific_defaults : {'apply_deepspeed': 'false', 'apply_ort': 'false'}
task : multi-object-tracking
hiddenlayerscanned
openmmlab_model_id : ocsort_yolox_x_crowdhuman_mot17-private-half
finetune_compute_allow_list : ['Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC8as_T4_v3', 'Standard_NC12s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC24s_v3', 'Standard_NC64as_T4_v3', 'Standard_NC96ads_A100_v4', 'Standard_ND96asr_v4', 'Standard_ND96amsr_A100_v4', 'Standard_ND40rs_v2']
inference_compute_allow_list : ['Standard_NC4as_T4_v3', 'Standard_NC6s_v3', 'Standard_NC12s_v3', 'Standard_NC24s_v3', 'Standard_NC16as_T4_v3', 'Standard_NC64as_T4_v3', 'Standard_NC8as_T4_v3', 'Standard_NC96ads_A100_v4', 'Standard_ND40rs_v2', 'Standard_ND96amsr_A100_v4', 'Standard_ND96asr_v4']
View in Studio: https://ml.azure.com/registries/azureml/models/ocsort_yolox_x_crowdhuman_mot17-private-half/version/6
License: apache-2.0
SharedComputeCapacityEnabled: True
finetune-min-sku-spec: 4|1|28|176 (vCPUs | GPUs | memory in GB | storage in GB)
finetune-recommended-sku: Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC8as_T4_v3, Standard_NC12s_v3, Standard_NC16as_T4_v3, Standard_NC24s_v3, Standard_NC64as_T4_v3, Standard_NC96ads_A100_v4, Standard_ND96asr_v4, Standard_ND96amsr_A100_v4, Standard_ND40rs_v2
finetuning-tasks: video-multi-object-tracking
inference-min-sku-spec: 4|1|28|176 (vCPUs | GPUs | memory in GB | storage in GB)
inference-recommended-sku: Standard_NC4as_T4_v3, Standard_NC6s_v3, Standard_NC12s_v3, Standard_NC24s_v3, Standard_NC16as_T4_v3, Standard_NC64as_T4_v3, Standard_NC8as_T4_v3, Standard_NC96ads_A100_v4, Standard_ND40rs_v2, Standard_ND96amsr_A100_v4, Standard_ND96asr_v4