GeforceRTX2080Ti SSDInceptionV2 - wom-ai/inference_results_v0.5 GitHub Wiki

2020-02-06

INT8 CHW4 Performance Only (C++)

[2020-02-06 06:00:48,733 main.py:295 INFO] Using config files: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/config.json
[2020-02-06 06:00:48,734 __init__.py:142 INFO] Parsing config file measurements/GeforceRTX2080Ti/ssd-small/SingleStream/config.json ...
[2020-02-06 06:00:48,734 main.py:299 INFO] Processing config "GeforceRTX2080Ti_ssd-small_SingleStream"
[2020-02-06 06:00:48,734 main.py:115 INFO] Running harness for ssd-small benchmark in SingleStream scenario...
[2020-02-06 06:00:48,736 __init__.py:42 INFO] Running command: ./build/bin/harness_default --plugins="build/plugins/NMSOptPlugin/libnmsoptplugin.so" --logfile_outdir="/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.02.06-06.00.48/GeforceRTX2080Ti/ssd-small/SingleStream" --logfile_prefix="mlperf_log_" --test_mode="PerformanceOnly" --use_graphs=false --gpu_batch_size=1 --map_path="data_maps/coco/val_map.txt" --tensor_path="${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4" --gpu_engines="./build/engines/GeforceRTX2080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan" --performance_sample_count=256 --max_dlas=0 --single_stream_expected_latency_ns=1621000 --mlperf_conf_path="measurements/GeforceRTX2080Ti/ssd-small/SingleStream/mlperf.conf" --user_conf_path="measurements/GeforceRTX2080Ti/ssd-small/SingleStream/user.conf" --scenario SingleStream --model ssd-small --response_postprocess coco
{'gpu_batch_size': 1, 'gpu_single_stream_expected_latency_ns': 1621000, 'input_dtype': 'int8', 'input_format': 'chw4', 'map_path': 'data_maps/coco/val_map.txt', 'precision': 'int8', 'tensor_path': '${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4', 'use_graphs': False, 'system_id': 'GeforceRTX2080Ti', 'scenario': 'SingleStream', 'benchmark': 'ssd-small', 'config_name': 'GeforceRTX2080Ti_ssd-small_SingleStream', 'test_mode': 'PerformanceOnly', 'log_dir': '/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.02.06-06.00.48'}
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/mlperf.conf
[I] user.conf path: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/user.conf
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Device:0: ./build/engines/GeforceRTX2080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan has been successfully loaded.
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Creating batcher thread: 0 EnableBatcherThreadPerDevice: false
Starting warmup. Running for a minimum of 5 seconds.
Finished warmup. Ran for 5.02079s.
================================================
MLPerf Results Summary
================================================
SUT name : LWIS_Server
Scenario : Single Stream
Mode     : Performance
90th percentile latency (ns) : 2058886
Result is : VALID
  Min duration satisfied : Yes
  Min queries satisfied : Yes

================================================
Additional Stats
================================================
QPS w/ loadgen overhead         : 500.89
QPS w/o loadgen overhead        : 509.88

Min latency (ns)                : 1682503
Max latency (ns)                : 11129228
Mean latency (ns)               : 1961241
50.00 percentile latency (ns)   : 1891033
90.00 percentile latency (ns)   : 2058886
95.00 percentile latency (ns)   : 2697564
97.00 percentile latency (ns)   : 2823167
99.00 percentile latency (ns)   : 3005627
99.90 percentile latency (ns)   : 4431690
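As a quick sanity check on the stats above: the "QPS w/o loadgen overhead" figure is simply the reciprocal of the mean per-sample latency, and the headline 90th-percentile result works out to about 2.06 ms. A minimal sketch:

```python
# Values copied from the "Additional Stats" block above.
mean_latency_ns = 1961241
p90_latency_ns = 2058886

# Throughput without LoadGen overhead = 1 / mean per-sample latency.
qps_no_overhead = 1e9 / mean_latency_ns
print(round(qps_no_overhead, 2))       # 509.88, matching the reported value

# The headline single-stream result, converted to milliseconds.
print(round(p90_latency_ns / 1e6, 2))  # 2.06 ms
```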

================================================
Test Parameters Used
================================================
samples_per_query : 1
target_qps : 616.903
target_latency (ns): 0
max_async_queries : 1
min_duration (ms): 60000
max_duration (ms): 0
min_query_count : 1024
max_query_count : 0
qsl_rng_seed : 3133965575612453542
sample_index_rng_seed : 665484352860916858
schedule_rng_seed : 3622009729038561421
accuracy_log_rng_seed : 0
accuracy_log_probability : 0
print_timestamps : false
performance_issue_unique : false
performance_issue_same : false
performance_issue_same_index : 0
performance_sample_count : 256
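Note how `target_qps` here relates to the `--single_stream_expected_latency_ns=1621000` flag in the harness command above: in the SingleStream scenario, LoadGen derives the target rate as the reciprocal of the expected latency. A minimal check:

```python
# --single_stream_expected_latency_ns from the harness command above.
expected_latency_ns = 1621000

# SingleStream target_qps is the reciprocal of the expected latency.
target_qps = 1e9 / expected_latency_ns
print(round(target_qps, 3))  # 616.903, as reported under "Test Parameters Used"
```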

No warnings encountered during test.

No errors encountered during test.
Device Device:0 processed:
  30055 batches of size 1
  Memcpy Calls: 0
  PerSampleCudaMemcpy Calls: 0
  BatchedCudaMemcpy Calls: 30055
&&&& PASSED Default_Harness # ./build/bin/harness_default
[2020-02-06 06:01:55,899 main.py:146 INFO] Result: 90th percentile latency (ns) : 2058886 and Result is : VALID

======================= Perf harness results: =======================

GeforceRTX2080Ti-SingleStream:
    ssd-small: 90th percentile latency (ns) : 2058886 and Result is : VALID


======================= Accuracy results: =======================

GeforceRTX2080Ti-SingleStream:
    ssd-small: No accuracy results in PerformanceOnly mode.

INT8 CHW4 Accuracy Only (C++)

[2020-02-06 06:01:56,434 main.py:295 INFO] Using config files: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/config.json
[2020-02-06 06:01:56,434 __init__.py:142 INFO] Parsing config file measurements/GeforceRTX2080Ti/ssd-small/SingleStream/config.json ...
[2020-02-06 06:01:56,435 main.py:299 INFO] Processing config "GeforceRTX2080Ti_ssd-small_SingleStream"
[2020-02-06 06:01:56,435 main.py:115 INFO] Running harness for ssd-small benchmark in SingleStream scenario...
[2020-02-06 06:01:56,437 __init__.py:42 INFO] Running command: ./build/bin/harness_default --plugins="build/plugins/NMSOptPlugin/libnmsoptplugin.so" --logfile_outdir="/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.02.06-06.01.56/GeforceRTX2080Ti/ssd-small/SingleStream" --logfile_prefix="mlperf_log_" --test_mode="AccuracyOnly" --use_graphs=false --gpu_batch_size=1 --map_path="data_maps/coco/val_map.txt" --tensor_path="${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4" --gpu_engines="./build/engines/GeforceRTX2080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan" --performance_sample_count=256 --max_dlas=0 --single_stream_expected_latency_ns=1621000 --mlperf_conf_path="measurements/GeforceRTX2080Ti/ssd-small/SingleStream/mlperf.conf" --user_conf_path="measurements/GeforceRTX2080Ti/ssd-small/SingleStream/user.conf" --scenario SingleStream --model ssd-small --response_postprocess coco
{'gpu_batch_size': 1, 'gpu_single_stream_expected_latency_ns': 1621000, 'input_dtype': 'int8', 'input_format': 'chw4', 'map_path': 'data_maps/coco/val_map.txt', 'precision': 'int8', 'tensor_path': '${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4', 'use_graphs': False, 'system_id': 'GeforceRTX2080Ti', 'scenario': 'SingleStream', 'benchmark': 'ssd-small', 'config_name': 'GeforceRTX2080Ti_ssd-small_SingleStream', 'test_mode': 'AccuracyOnly', 'log_dir': '/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.02.06-06.01.56'}
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/mlperf.conf
[I] user.conf path: measurements/GeforceRTX2080Ti/ssd-small/SingleStream/user.conf
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Device:0: ./build/engines/GeforceRTX2080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan has been successfully loaded.
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Creating batcher thread: 0 EnableBatcherThreadPerDevice: false
Starting warmup. Running for a minimum of 5 seconds.
Finished warmup. Ran for 5.02155s.

No warnings encountered during test.

No errors encountered during test.
Device Device:0 processed:
  5000 batches of size 1
  Memcpy Calls: 0
  PerSampleCudaMemcpy Calls: 0
  BatchedCudaMemcpy Calls: 5000
&&&& PASSED Default_Harness # ./build/bin/harness_default
[2020-02-06 06:02:14,070 main.py:146 INFO] Result: Cannot find performance result. Maybe you are running in AccuracyOnly mode.
[2020-02-06 06:02:14,077 __init__.py:42 INFO] Running command: python3 build/inference/v0.5/classification_and_detection/tools/accuracy-coco.py --mlperf-accuracy-file /work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.02.06-06.01.56/GeforceRTX2080Ti/ssd-small/SingleStream/mlperf_log_accuracy.json             --coco-dir /work/mlperf/inference_results_v0.5/closed/NVIDIA/build/preprocessed_data/coco --output-file build/ssd-small-results.json
loading annotations into memory...
Done (t=0.42s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.16s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=13.93s).
Accumulating evaluation results...
DONE (t=2.25s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.276
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.401
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.301
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.026
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.200
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.627
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.239
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.306
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.307
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.033
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.231
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.693
mAP=27.554%

======================= Perf harness results: =======================

GeforceRTX2080Ti-SingleStream:
    ssd-small: Cannot find performance result. Maybe you are running in AccuracyOnly mode.


======================= Accuracy results: =======================

GeforceRTX2080Ti-SingleStream:
    ssd-small: Accuracy = 27.554, Threshold = 21.780. Accuracy test PASSED.
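The PASSED verdict follows MLPerf's closed-division accuracy rule: the measured mAP must reach 99% of the FP32 reference accuracy, which for ssd-small in v0.5 is 22.0 mAP (hence the 21.780 threshold above). A minimal sketch of the check:

```python
# 22.0 is the FP32 reference mAP for ssd-small in MLPerf Inference v0.5.
reference_map = 22.0
measured_map = 27.554             # from the COCO evaluation above

threshold = 0.99 * reference_map
print(round(threshold, 3))        # 21.78
print(measured_map >= threshold)  # True -> "Accuracy test PASSED"
```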

INT8 CHW4 Inference (Python)

$ sh run_infer_geforcertx2080ti_int8_chw4.sh
[2020-02-06 05:58:09,721 infer.py:144 INFO] Running accuracy test...
[2020-02-06 05:58:09,721 infer.py:58 INFO] Running SSDMobileNet functionality test for engine [ ./build/engines/GeforceRTX2080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan ] with batch size 1
[TensorRT] VERBOSE: Plugin Creator registration succeeded - GridAnchor_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - NMS_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Reorg_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Region_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Clip_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - LReLU_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - PriorBox_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Normalize_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - RPROI_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - BatchedNMS_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - FlattenConcat_TRT
[TensorRT] WARNING: TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[TensorRT] VERBOSE: Deserialize required 1371080 microseconds.
[2020-02-06 05:58:11,351 runner.py:38 INFO] Binding Input
[2020-02-06 05:58:11,351 runner.py:38 INFO] Binding Postprocessor
[TensorRT] WARNING: TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[2020-02-06 05:58:11,779 infer.py:85 INFO] Running validation on 200 images. Please wait...
[2020-02-06 05:58:11,792 infer.py:95 INFO] Batch 0 >> Inference time:  0.006926
[2020-02-06 05:58:11,795 infer.py:95 INFO] Batch 1 >> Inference time:  0.002214
[2020-02-06 05:58:11,798 infer.py:95 INFO] Batch 2 >> Inference time:  0.002206
[2020-02-06 05:58:11,801 infer.py:95 INFO] Batch 3 >> Inference time:  0.002210
[2020-02-06 05:58:11,804 infer.py:95 INFO] Batch 4 >> Inference time:  0.002208
[2020-02-06 05:58:11,806 infer.py:95 INFO] Batch 5 >> Inference time:  0.002204
[2020-02-06 05:58:11,809 infer.py:95 INFO] Batch 6 >> Inference time:  0.002211
[2020-02-06 05:58:11,812 infer.py:95 INFO] Batch 7 >> Inference time:  0.002207
[2020-02-06 05:58:11,814 infer.py:95 INFO] Batch 8 >> Inference time:  0.002217
[2020-02-06 05:58:11,817 infer.py:95 INFO] Batch 9 >> Inference time:  0.002196

...

[2020-02-06 05:58:12,231 infer.py:95 INFO] Batch 180 >> Inference time:  0.001653
[2020-02-06 05:58:12,233 infer.py:95 INFO] Batch 181 >> Inference time:  0.001662
[2020-02-06 05:58:12,235 infer.py:95 INFO] Batch 182 >> Inference time:  0.001654
[2020-02-06 05:58:12,237 infer.py:95 INFO] Batch 183 >> Inference time:  0.001659
[2020-02-06 05:58:12,239 infer.py:95 INFO] Batch 184 >> Inference time:  0.001661
[2020-02-06 05:58:12,242 infer.py:95 INFO] Batch 185 >> Inference time:  0.001664
[2020-02-06 05:58:12,244 infer.py:95 INFO] Batch 186 >> Inference time:  0.001714
[2020-02-06 05:58:12,246 infer.py:95 INFO] Batch 187 >> Inference time:  0.001654
[2020-02-06 05:58:12,248 infer.py:95 INFO] Batch 188 >> Inference time:  0.001659
[2020-02-06 05:58:12,250 infer.py:95 INFO] Batch 189 >> Inference time:  0.001657
[2020-02-06 05:58:12,252 infer.py:95 INFO] Batch 190 >> Inference time:  0.001657
[2020-02-06 05:58:12,255 infer.py:95 INFO] Batch 191 >> Inference time:  0.001661
[2020-02-06 05:58:12,257 infer.py:95 INFO] Batch 192 >> Inference time:  0.001664
[2020-02-06 05:58:12,259 infer.py:95 INFO] Batch 193 >> Inference time:  0.001656
[2020-02-06 05:58:12,261 infer.py:95 INFO] Batch 194 >> Inference time:  0.001665
[2020-02-06 05:58:12,263 infer.py:95 INFO] Batch 195 >> Inference time:  0.001659
[2020-02-06 05:58:12,265 infer.py:95 INFO] Batch 196 >> Inference time:  0.001687
[2020-02-06 05:58:12,268 infer.py:95 INFO] Batch 197 >> Inference time:  0.001667
[2020-02-06 05:58:12,270 infer.py:95 INFO] Batch 198 >> Inference time:  0.001669
[2020-02-06 05:58:12,272 infer.py:95 INFO] Batch 199 >> Inference time:  0.001663
[2020-02-06 05:58:13,326 infer.py:139 INFO] Get mAP score = 0.306068 Target = 0.223860
loading annotations into memory...
Done (t=0.40s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.55s).
Accumulating evaluation results...
DONE (t=0.47s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.306
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.433
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.335
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.029
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.254
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.686
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.263
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.325
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.325
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.035
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.268
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.713
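For reference, the steady-state per-batch inference times in this functionality test (about 1.66 ms at batch size 1) correspond to roughly 600 images/s. A quick conversion, using one representative value from the log:

```python
batch_size = 1
inference_time_s = 0.001660  # representative steady-state time from the log

# Images per second at this batch size and per-batch latency.
fps = batch_size / inference_time_s
print(round(fps, 1))         # 602.4 images/s
```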