GeforceGTX1080Ti
2020-06-02
INT8 CHW4 Performance mode only (C++)
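The transcript below comes from the NVIDIA v0.5 `main.py` harness wrapper, which assembles the C++ harness command line from `measurements/GeforceGTX1080Ti/ssd-small/SingleStream/config.json`. As a rough sketch of the knobs involved (field names and values are copied from the `BenchmarkHarness` dump in the log below; the actual schema of the config file in the repo may differ):

```python
# Sketch only: reconstructs the SingleStream settings echoed by BenchmarkHarness
# below. The real config.json in the repo may nest or name these differently.
import json

single_stream_config = {
    "gpu_batch_size": 1,                                  # one sample per query in SingleStream
    "gpu_single_stream_expected_latency_ns": 1621000,     # expected per-query latency hint
    "input_dtype": "int8",
    "input_format": "chw4",
    "precision": "int8",
    "map_path": "data_maps/coco/val_map.txt",
    "tensor_path": "${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4",
    "use_graphs": False,                                  # CUDA graphs disabled for this run
}

print(json.dumps(single_stream_config, indent=4))
```

The full transcript of `run_harness.sh` follows.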
# sh run_harness.sh
[2020-06-02 09:57:01,690 main.py:302 INFO] Using config files: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/config.json
[2020-06-02 09:57:01,690 __init__.py:142 INFO] Parsing config file measurements/GeforceGTX1080Ti/ssd-small/SingleStream/config.json ...
[2020-06-02 09:57:01,690 main.py:306 INFO] Processing config "GeforceGTX1080Ti_ssd-small_SingleStream"
[2020-06-02 09:57:01,690 main.py:116 INFO] Running harness for ssd-small benchmark in SingleStream scenario...
BenchmarkHarness (
{'gpu_batch_size': 1, 'gpu_single_stream_expected_latency_ns': 1621000, 'input_dtype': 'int8', 'input_format': 'chw4', 'map_path': 'data_maps/coco/val_map.txt', 'precision': 'int8', 'tensor_path': '${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4', 'use_graphs': False, 'system_id': 'GeforceGTX1080Ti', 'scenario': 'SingleStream', 'benchmark': 'ssd-small', 'config_name': 'GeforceGTX1080Ti_ssd-small_SingleStream', 'test_mode': 'PerformanceOnly', 'warmup_duration': 20.0, 'log_dir': '/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.06.02-09.57.01'}
BenchmarkHarness )
[2020-06-02 09:57:01,706 __init__.py:42 INFO] Running command: ./build/bin/harness_default --plugins="build/plugins/NMSOptPlugin/libnmsoptplugin.so" --logfile_outdir="/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.06.02-09.57.01/GeforceGTX1080Ti/ssd-small/SingleStream" --logfile_prefix="mlperf_log_" --test_mode="PerformanceOnly" --warmup_duration=20.0 --use_graphs=false --gpu_batch_size=1 --map_path="data_maps/coco/val_map.txt" --tensor_path="${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4" --gpu_engines="./build/engines/GeforceGTX1080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan" --performance_sample_count=256 --max_dlas=0 --single_stream_expected_latency_ns=1621000 --mlperf_conf_path="measurements/GeforceGTX1080Ti/ssd-small/SingleStream/mlperf.conf" --user_conf_path="measurements/GeforceGTX1080Ti/ssd-small/SingleStream/user.conf" --scenario SingleStream --model ssd-small --response_postprocess coco
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/mlperf.conf
[I] user.conf path: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/user.conf
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Device:0: ./build/engines/GeforceGTX1080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan has been successfully loaded.
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Creating batcher thread: 0 EnableBatcherThreadPerDevice: false
Starting warmup. Running for a minimum of 20 seconds.
Finished warmup. Ran for 20.013s.
================================================
MLPerf Results Summary
================================================
SUT name : LWIS_Server
Scenario : Single Stream
Mode : Performance
90th percentile latency (ns) : 1180175
Result is : VALID
Min duration satisfied : Yes
Min queries satisfied : Yes
================================================
Additional Stats
================================================
QPS w/ loadgen overhead : 842.59
QPS w/o loadgen overhead : 865.97
Min latency (ns) : 957857
Max latency (ns) : 6772567
Mean latency (ns) : 1154780
50.00 percentile latency (ns) : 1122845
90.00 percentile latency (ns) : 1180175
95.00 percentile latency (ns) : 1402822
97.00 percentile latency (ns) : 1649935
99.00 percentile latency (ns) : 1867928
99.90 percentile latency (ns) : 3139367
================================================
Test Parameters Used
================================================
samples_per_query : 1
target_qps : 616.903
target_latency (ns): 0
max_async_queries : 1
min_duration (ms): 60000
max_duration (ms): 0
min_query_count : 1024
max_query_count : 0
qsl_rng_seed : 3133965575612453542
sample_index_rng_seed : 665484352860916858
schedule_rng_seed : 3622009729038561421
accuracy_log_rng_seed : 0
accuracy_log_probability : 0
print_timestamps : false
performance_issue_unique : false
performance_issue_same : false
performance_issue_same_index : 0
performance_sample_count : 256
No warnings encountered during test.
No errors encountered during test.
Device Device:0 processed:
50557 batches of size 1
Memcpy Calls: 0
PerSampleCudaMemcpy Calls: 0
BatchedCudaMemcpy Calls: 50557
&&&& PASSED Default_Harness # ./build/bin/harness_default
[2020-06-02 09:58:22,880 main.py:153 INFO] Result: 90th percentile latency (ns) : 1180175 and Result is : VALID
======================= Perf harness results: =======================
GeforceGTX1080Ti-SingleStream:
ssd-small: 90th percentile latency (ns) : 1180175 and Result is : VALID
======================= Accuracy results: =======================
GeforceGTX1080Ti-SingleStream:
ssd-small: No accuracy results in PerformanceOnly mode.
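The summary stats above are internally consistent: in SingleStream, queries are issued back-to-back, so QPS without loadgen overhead is just the reciprocal of the mean per-query latency, and QPS with loadgen overhead is the total query count divided by the wall-clock duration (just over the 60 s minimum). A quick check using the numbers reported above (the ~60 s duration is inferred from min_duration, not printed directly in the log):

```python
# Sanity-check the relationship between the latency stats and the reported QPS.
# All numbers are copied from the MLPerf summary above.

mean_latency_ns = 1154780          # Mean latency (ns)
p90_latency_ns = 1180175           # 90th percentile latency (ns)
queries = 50557                    # batches of size 1 processed by Device:0
min_duration_s = 60.0              # min_duration from "Test Parameters Used" (60000 ms)

# Queries run back-to-back, so QPS without loadgen overhead is 1 / mean latency.
qps_no_overhead = 1e9 / mean_latency_ns
print(f"QPS w/o loadgen overhead ~ {qps_no_overhead:.2f}")    # ~866.0; log reports 865.97

# QPS with loadgen overhead is queries over wall-clock time; the run stops
# shortly after the 60 s minimum duration is satisfied.
qps_with_overhead = queries / min_duration_s
print(f"QPS w/ loadgen overhead ~ {qps_with_overhead:.2f}")   # ~842.6; log reports 842.59

# The reported SingleStream result is the 90th percentile latency.
print(f"90th percentile latency: {p90_latency_ns} ns ({p90_latency_ns / 1e6:.3f} ms)")
```

The log then continues with the AccuracyOnly pass over the same engine.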
[2020-06-02 09:58:23,425 main.py:302 INFO] Using config files: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/config.json
[2020-06-02 09:58:23,425 __init__.py:142 INFO] Parsing config file measurements/GeforceGTX1080Ti/ssd-small/SingleStream/config.json ...
[2020-06-02 09:58:23,425 main.py:306 INFO] Processing config "GeforceGTX1080Ti_ssd-small_SingleStream"
[2020-06-02 09:58:23,425 main.py:116 INFO] Running harness for ssd-small benchmark in SingleStream scenario...
BenchmarkHarness (
{'gpu_batch_size': 1, 'gpu_single_stream_expected_latency_ns': 1621000, 'input_dtype': 'int8', 'input_format': 'chw4', 'map_path': 'data_maps/coco/val_map.txt', 'precision': 'int8', 'tensor_path': '${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4', 'use_graphs': False, 'system_id': 'GeforceGTX1080Ti', 'scenario': 'SingleStream', 'benchmark': 'ssd-small', 'config_name': 'GeforceGTX1080Ti_ssd-small_SingleStream', 'test_mode': 'AccuracyOnly', 'log_dir': '/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.06.02-09.58.23'}
BenchmarkHarness )
[2020-06-02 09:58:23,427 __init__.py:42 INFO] Running command: ./build/bin/harness_default --plugins="build/plugins/NMSOptPlugin/libnmsoptplugin.so" --logfile_outdir="/work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.06.02-09.58.23/GeforceGTX1080Ti/ssd-small/SingleStream" --logfile_prefix="mlperf_log_" --test_mode="AccuracyOnly" --use_graphs=false --gpu_batch_size=1 --map_path="data_maps/coco/val_map.txt" --tensor_path="${PREPROCESSED_DATA_DIR}/coco/val2017/SSDMobileNet/int8_chw4" --gpu_engines="./build/engines/GeforceGTX1080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan" --performance_sample_count=256 --max_dlas=0 --single_stream_expected_latency_ns=1621000 --mlperf_conf_path="measurements/GeforceGTX1080Ti/ssd-small/SingleStream/mlperf.conf" --user_conf_path="measurements/GeforceGTX1080Ti/ssd-small/SingleStream/user.conf" --scenario SingleStream --model ssd-small --response_postprocess coco
&&&& RUNNING Default_Harness # ./build/bin/harness_default
[I] mlperf.conf path: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/mlperf.conf
[I] user.conf path: measurements/GeforceGTX1080Ti/ssd-small/SingleStream/user.conf
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Device:0: ./build/engines/GeforceGTX1080Ti/ssd-small/SingleStream/ssd-small-SingleStream-gpu-b1-int8.plan has been successfully loaded.
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[W] [TRT] TensorRT was linked against cuBLAS 10.2.0 but loaded cuBLAS 10.1.0
[I] Creating batcher thread: 0 EnableBatcherThreadPerDevice: false
Starting warmup. Running for a minimum of 5 seconds.
Finished warmup. Ran for 5.01053s.
No warnings encountered during test.
No errors encountered during test.
Device Device:0 processed:
5000 batches of size 1
Memcpy Calls: 0
PerSampleCudaMemcpy Calls: 0
BatchedCudaMemcpy Calls: 5000
&&&& PASSED Default_Harness # ./build/bin/harness_default
[2020-06-02 09:58:39,299 main.py:153 INFO] Result: Cannot find performance result. Maybe you are running in AccuracyOnly mode.
[2020-06-02 09:58:39,305 __init__.py:42 INFO] Running command: python3 build/inference/v0.5/classification_and_detection/tools/accuracy-coco.py --mlperf-accuracy-file /work/mlperf/inference_results_v0.5/closed/NVIDIA/build/logs/2020.06.02-09.58.23/GeforceGTX1080Ti/ssd-small/SingleStream/mlperf_log_accuracy.json --coco-dir /work/mlperf/inference_results_v0.5/closed/NVIDIA/build/preprocessed_data/coco --output-file build/ssd-small-results.json
loading annotations into memory...
Done (t=0.40s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.15s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=13.11s).
Accumulating evaluation results...
DONE (t=2.20s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.229
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.346
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.253
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.017
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.164
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.525
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.207
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.260
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.261
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.021
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.189
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.598
mAP=22.908%
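The accuracy run only prints the raw COCO numbers; whether it meets the MLPerf quality bar still has to be checked against the ssd-small target. A small sketch of that check (the 22.0 mAP FP32 reference and the 99% threshold are assumptions based on the MLPerf v0.5 closed-division rules, not values taken from this log; verify them against the rules before relying on the result):

```python
# Compare the mAP printed by accuracy-coco.py against the assumed v0.5 target
# for ssd-small (99% of a 22.0 mAP FP32 reference).
import re

accuracy_output = "mAP=22.908%"      # last line printed by accuracy-coco.py above
reference_map = 22.0                 # assumed FP32 reference mAP for ssd-small
threshold = 0.99 * reference_map     # assumed closed-division threshold (99% of reference)

measured = float(re.search(r"mAP=([\d.]+)%", accuracy_output).group(1))
print(f"measured mAP = {measured:.3f}%, target >= {threshold:.3f}%")
print("PASS" if measured >= threshold else "FAIL")
```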
INT8 CHW4 Inference Only