Benchmarking resnet50-binary-0001 with OpenVINO's benchmark_app, using the default THROUGHPUT hint, on three machines: an Intel N100, a Core i7-1165G7, and a Core i7-10700K paired with an Arc A380.

Intel N100

CPU

openvino@5ca032a8cb7e:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 23.37 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 183.24 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 2
[ INFO ]   NUM_STREAMS: 2
[ INFO ]   AFFINITY: CORE
[ INFO ]   INFERENCE_NUM_THREADS: 4
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: f32
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: YES
[ INFO ]   SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: YES
[ INFO ]   EXECUTION_DEVICES: CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 2 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 34.82 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ CPU ]
[ INFO ] Count:               3136 iterations
[ INFO ] Duration:            60049.55 ms
[ INFO ] Latency:
[ INFO ]    Median:           37.45 ms
[ INFO ]    Average:          38.28 ms
[ INFO ]    Min:              34.27 ms
[ INFO ]    Max:              81.46 ms
[ INFO ] Throughput:          52.22 FPS
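
A note on the Step 3 warning: when no hint is passed on the command line, benchmark_app compiles with the THROUGHPUT performance hint, and the CPU plugin derives NUM_STREAMS and INFERENCE_NUM_THREADS from it (here 2 streams over 4 inference threads). A minimal OpenVINO 2.0 C++ sketch (not part of the recorded runs) that reproduces Steps 7-8 — compile with the same hint and query the same properties:

#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("/tmp/resnet50-binary-0001.xml");

    // Same default that benchmark_app applies when -hint is not given.
    auto compiled = core.compile_model(
        model, "CPU",
        ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));

    // Step 8 equivalents: the plugin derives these from the hint.
    std::cout << "NUM_STREAMS: "
              << compiled.get_property(ov::num_streams).num << "\n";
    std::cout << "OPTIMAL_NUMBER_OF_INFER_REQUESTS: "
              << compiled.get_property(ov::optimal_number_of_infer_requests)
              << "\n";
}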

GPU

openvino@5ca032a8cb7e:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d GPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(GPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 23.11 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 4854.81 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 32
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   EXECUTION_DEVICES: GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
[ INFO ]   LOADED_FROM_CACHE: NO
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 32 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 979.00 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU.0 ]
[ INFO ] Count:               6016 iterations
[ INFO ] Duration:            60524.62 ms
[ INFO ] Latency:
[ INFO ]    Median:           319.04 ms
[ INFO ]    Average:          321.27 ms
[ INFO ]    Min:              85.62 ms
[ INFO ]    Max:              331.83 ms
[ INFO ] Throughput:          99.40 FPS
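
Worth noting versus the CPU run: under the THROUGHPUT hint the GPU plugin goes through automatic batching (AUTO_BATCH_TIMEOUT: 1000, 32 in-flight requests), which roughly doubles FPS over the CPU here but pushes median latency from ~37 ms to ~319 ms. If per-request latency matters, batching can be opted out of at compile time via the standard ov::hint::allow_auto_batching property; a hedged sketch (throughput will drop accordingly):

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("/tmp/resnet50-binary-0001.xml");

    // Keep the THROUGHPUT hint but opt out of automatic batching.
    auto compiled = core.compile_model(
        model, "GPU",
        ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT),
        ov::hint::allow_auto_batching(false));
}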

MULTI

openvino@5ca032a8cb7e:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d MULTI:GPU,CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] GPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] MULTI
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(MULTI) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 22.80 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 5027.79 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 34
[ INFO ]   MODEL_PRIORITY: MEDIUM
[ INFO ]   MULTI_DEVICE_PRIORITIES: GPU,CPU
[ INFO ]   CPU: 
[ INFO ]     CPU_BIND_THREAD: YES
[ INFO ]     CPU_THREADS_NUM: 0
[ INFO ]     CPU_THROUGHPUT_STREAMS: 2
[ INFO ]     DEVICE_ID: 
[ INFO ]     DUMP_EXEC_GRAPH_AS_DOT: 
[ INFO ]     DYN_BATCH_ENABLED: NO
[ INFO ]     DYN_BATCH_LIMIT: 0
[ INFO ]     ENFORCE_BF16: NO
[ INFO ]     EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 2
[ INFO ]     PERFORMANCE_HINT: THROUGHPUT
[ INFO ]     PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]     PERF_COUNT: NO
[ INFO ]   GPU: 
[ INFO ]     AUTO_BATCH_TIMEOUT: 1000
[ INFO ]     EXECUTION_DEVICES: GPU.0
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 32
[ INFO ]   EXECUTION_DEVICES: GPU CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 34 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 783.98 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU CPU ]
[ INFO ] Count:               7820 iterations
[ INFO ] Duration:            61118.62 ms
[ INFO ] Throughput:          127.95 FPS
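
MULTI keeps both devices busy at once: the 34 requests are the GPU's 32 plus the CPU's 2, and the combined 127.95 FPS lands below the ~151.6 FPS sum of the standalone runs. The API-side equivalent of "-d MULTI:GPU,CPU" looks roughly like the sketch below; the device order becomes MULTI_DEVICE_PRIORITIES, as shown in Step 8 above.

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("/tmp/resnet50-binary-0001.xml");

    // "MULTI" plus ov::device::priorities mirrors "-d MULTI:GPU,CPU".
    auto compiled = core.compile_model(
        model, "MULTI", ov::device::priorities("GPU", "CPU"));
}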

Core i7-1165G7

CPU

openvino@9594bb13b1f6:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 14.82 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 145.95 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: CORE
[ INFO ]   INFERENCE_NUM_THREADS: 8
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: f32
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: YES
[ INFO ]   SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: YES
[ INFO ]   EXECUTION_DEVICES: CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 8.27 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ CPU ]
[ INFO ] Count:               16604 iterations
[ INFO ] Duration:            60015.06 ms
[ INFO ] Latency:
[ INFO ]    Median:           14.59 ms
[ INFO ]    Average:          14.44 ms
[ INFO ]    Min:              8.19 ms
[ INFO ]    Max:              35.32 ms
[ INFO ] Throughput:          276.66 FPS

GPU

openvino@9594bb13b1f6:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d GPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(GPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 13.30 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 2835.35 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 64
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   EXECUTION_DEVICES: GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
[ INFO ]   LOADED_FROM_CACHE: NO
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 64 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 985.15 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU.0 ]
[ INFO ] Count:               24832 iterations
[ INFO ] Duration:            60188.07 ms
[ INFO ] Latency:
[ INFO ]    Median:           154.87 ms
[ INFO ]    Average:          154.95 ms
[ INFO ]    Min:              58.05 ms
[ INFO ]    Max:              158.39 ms
[ INFO ] Throughput:          412.57 FPS
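
Same pattern on the 1165G7's iGPU: auto-batching settles on 64 requests and median latency climbs to ~155 ms. benchmark_app's -nireq option caps the number of in-flight requests if you want to probe the throughput/latency trade-off; an illustrative command (not one of the recorded runs):

./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d GPU -nireq 4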

MULTI

openvino@9594bb13b1f6:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d MULTI:GPU,CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] GPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] MULTI
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(MULTI) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 13.32 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 2946.17 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 68
[ INFO ]   MODEL_PRIORITY: MEDIUM
[ INFO ]   MULTI_DEVICE_PRIORITIES: GPU,CPU
[ INFO ]   CPU: 
[ INFO ]     CPU_BIND_THREAD: YES
[ INFO ]     CPU_THREADS_NUM: 0
[ INFO ]     CPU_THROUGHPUT_STREAMS: 4
[ INFO ]     DEVICE_ID: 
[ INFO ]     DUMP_EXEC_GRAPH_AS_DOT: 
[ INFO ]     DYN_BATCH_ENABLED: NO
[ INFO ]     DYN_BATCH_LIMIT: 0
[ INFO ]     ENFORCE_BF16: NO
[ INFO ]     EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]     PERFORMANCE_HINT: THROUGHPUT
[ INFO ]     PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]     PERF_COUNT: NO
[ INFO ]   GPU: 
[ INFO ]     AUTO_BATCH_TIMEOUT: 1000
[ INFO ]     EXECUTION_DEVICES: GPU.0
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 64
[ INFO ]   EXECUTION_DEVICES: GPU CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 68 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 831.04 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU CPU ]
[ INFO ] Count:               32096 iterations
[ INFO ] Duration:            61114.52 ms
[ INFO ] Throughput:          525.18 FPS
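
MULTI again advertises the sum of both devices' requests (68 = 64 + 4). For reference, Step 10's measurement loop boils down to this pattern: create the reported optimal number of infer requests and keep them all in flight asynchronously. A minimal sketch (inputs would be filled before start_async in real code):

#include <vector>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto compiled = core.compile_model(
        core.read_model("/tmp/resnet50-binary-0001.xml"), "MULTI",
        ov::device::priorities("GPU", "CPU"));

    uint32_t nireq = compiled.get_property(ov::optimal_number_of_infer_requests);
    std::vector<ov::InferRequest> requests;
    for (uint32_t i = 0; i < nireq; ++i)
        requests.push_back(compiled.create_infer_request());

    // benchmark_app re-queues each request on completion for the full 60 s;
    // a single round trip is shown here.
    for (auto& req : requests) req.start_async();
    for (auto& req : requests) req.wait();
}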

Core i7-10700K + Arc A380

CPU

openvino@0bf0cca613c1:/opt/intel/openvino_2023.3.0.13775$ ./samples/cpp/samples_bin/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d CPU          
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 16.88 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 150.30 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: CORE
[ INFO ]   INFERENCE_NUM_THREADS: 16
[ INFO ]   PERF_COUNT: NO
[ INFO ]   INFERENCE_PRECISION_HINT: f32
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: YES
[ INFO ]   SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: YES
[ INFO ]   EXECUTION_DEVICES: CPU
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: NO
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 8.26 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ CPU ]
[ INFO ] Count:               12204 iterations
[ INFO ] Duration:            60034.62 ms
[ INFO ] Latency:
[ INFO ]    Median:           19.72 ms
[ INFO ]    Average:          19.66 ms
[ INFO ]    Min:              15.66 ms
[ INFO ]    Max:              31.56 ms
[ INFO ] Throughput:          203.28 FPS

GPU: iGPU

openvino@e25a1984b7fd:/opt/intel/openvino_2023.3.0.13775$ ./samples/cpp/samples_bin/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d GPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(GPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 51.93 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 5838.75 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 16
[ INFO ]   SUPPORTED_METRICS: OPTIMAL_NUMBER_OF_INFER_REQUESTS SUPPORTED_METRICS NETWORK_NAME SUPPORTED_CONFIG_KEYS EXECUTION_DEVICES
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   SUPPORTED_CONFIG_KEYS: AUTO_BATCH_TIMEOUT
[ INFO ]   EXECUTION_DEVICES: OCL_GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 16 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 1152.58 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ OCL_GPU.0 ]
[ INFO ] Count:               2352 iterations
[ INFO ] Duration:            60578.62 ms
[ INFO ] Latency:
[ INFO ]    Median:           414.39 ms
[ INFO ]    Average:          411.05 ms
[ INFO ]    Min:              212.83 ms
[ INFO ]    Max:              420.91 ms
[ INFO ] Throughput:          38.83 FPS

GPU: A380

openvino@a649c849386e:/opt/intel/openvino_2023.0.0.10926$ ./samples/cpp/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d GPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] Device info:
[ INFO ] GPU
[ INFO ] Build ................................. 2023.0.0-10926-b4452d56304-releases/2023/0
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(GPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 17.53 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 3670.05 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   EXECUTION_DEVICES: GPU.0
[ INFO ]   AUTO_BATCH_TIMEOUT: 1000
[ INFO ]   LOADED_FROM_CACHE: NO
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 7.45 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU.0 ]
[ INFO ] Count:               25344 iterations
[ INFO ] Duration:            60017.17 ms
[ INFO ] Latency:
[ INFO ]    Median:           9.47 ms
[ INFO ]    Average:          9.47 ms
[ INFO ]    Min:              4.31 ms
[ INFO ]    Max:              14.91 ms
[ INFO ] Throughput:          422.28 FPS
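
The iGPU and A380 runs above were captured in different containers (note the different hostnames and OpenVINO builds), each apparently seeing a single GPU as GPU.0. When both GPUs are visible to one process, OpenVINO enumerates them as GPU.0, GPU.1, and a specific one can be targeted with -d GPU.0 or -d GPU.1. A small sketch, assuming both devices are passed through, to check which index maps to which adapter:

#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Prints e.g. "GPU.0: <full device name>" for every visible device.
    for (const auto& dev : core.get_available_devices())
        std::cout << dev << ": "
                  << core.get_property(dev, ov::device::full_name) << "\n";
}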

MULTI: CPU + A380

openvino@0bf0cca613c1:/opt/intel/openvino_2023.3.0.13775$ ./samples/cpp/samples_bin/samples_bin/benchmark_app -m /tmp/resnet50-binary-0001.xml -d MULTI:GPU,CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] GPU
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] MULTI
[ INFO ] Build ................................. 2023.3.0-13775-ceeafaf64f3-releases/2023/3
[ INFO ] 
[ INFO ] 
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(MULTI) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 35.56 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     0 (node: 0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Network outputs:
[ INFO ]     1463 (node: 1463) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 10122.19 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch-jit-export
[ INFO ]   EXECUTION_DEVICES: GPU CPU
[ INFO ]   PERFORMANCE_HINT: THROUGHPUT
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 132
[ INFO ]   CPU: 
[ INFO ]     AFFINITY: CORE
[ INFO ]     CPU_DENORMALS_OPTIMIZATION: NO
[ INFO ]     CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1
[ INFO ]     ENABLE_CPU_PINNING: YES
[ INFO ]     ENABLE_HYPER_THREADING: YES
[ INFO ]     EXECUTION_DEVICES: CPU
[ INFO ]     EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ]     INFERENCE_NUM_THREADS: 16
[ INFO ]     INFERENCE_PRECISION_HINT: f32
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     NUM_STREAMS: 4
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]     PERFORMANCE_HINT: THROUGHPUT
[ INFO ]     PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]     PERF_COUNT: NO
[ INFO ]     SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ]   GPU: 
[ INFO ]     AUTO_BATCH_TIMEOUT: 1000
[ INFO ]     EXECUTION_DEVICES: OCL_GPU.0
[ INFO ]     NETWORK_NAME: torch-jit-export
[ INFO ]     OPTIMAL_NUMBER_OF_INFER_REQUESTS: 128
[ INFO ]     SUPPORTED_CONFIG_KEYS: AUTO_BATCH_TIMEOUT
[ INFO ]   MODEL_PRIORITY: MEDIUM
[ INFO ]   LOADED_FROM_CACHE: NO
[ INFO ]   SCHEDULE_POLICY: DEVICE_PRIORITY
[ INFO ]   MULTI_DEVICE_PRIORITIES: GPU,CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] 0  ([N,C,H,W], u8, [1,3,224,224], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 132 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 763.51 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ GPU CPU ]
[ INFO ] Count:               38412 iterations
[ INFO ] Duration:            61324.81 ms
[ INFO ] Throughput:          626.37 FPS
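
Pulling the numbers together (same model and THROUGHPUT hint everywhere; the 10700K rows mix 2023.0 and 2023.3 builds as noted above):

Machine                     Device           Throughput
Intel N100                  CPU              52.22 FPS
Intel N100                  iGPU             99.40 FPS
Intel N100                  MULTI:GPU,CPU    127.95 FPS
Core i7-1165G7              CPU              276.66 FPS
Core i7-1165G7              iGPU             412.57 FPS
Core i7-1165G7              MULTI:GPU,CPU    525.18 FPS
Core i7-10700K + Arc A380   CPU              203.28 FPS
Core i7-10700K + Arc A380   iGPU             38.83 FPS
Core i7-10700K + Arc A380   A380             422.28 FPS
Core i7-10700K + Arc A380   MULTI:GPU,CPU    626.37 FPS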
