From 8168c919782a080490482f37100acdee94c2f9d2 Mon Sep 17 00:00:00 2001
From: sfatimar <64512376+sfatimar@users.noreply.github.com>
Date: Wed, 25 Nov 2020 22:20:01 +0530
Subject: [PATCH] Sahar/fix documentation shared lib (#5926)

* Update OpenVINO-ExecutionProvider.Md

update openvino-executionprovider.md for shared library

* Update Build.md

updated --build_shared_lib flag for building openvino shared provider lib

* Update Dockerfile.openvino

building for shared library with the new changes for openvino shared lib

* Revert "Update Build.md"

This reverts commit c9cf5fee76be7fdc10cadf07259f1d4ed5b45b93.

* Revert "Update Dockerfile.openvino "

This reverts commit e1624e4f93a4cfb425b6f21d7fb71b299a146740.

* Update OpenVINO-ExecutionProvider.md

fix documentation to the shared library

Co-authored-by: sfatimar
---
 .../OpenVINO-ExecutionProvider.md | 33 ++++++++++++---------------------
 1 file changed, 12 insertions(+), 21 deletions(-)

diff --git a/docs/execution_providers/OpenVINO-ExecutionProvider.md b/docs/execution_providers/OpenVINO-ExecutionProvider.md
index 4359ba3639..8f092bcd34 100644
--- a/docs/execution_providers/OpenVINO-ExecutionProvider.md
+++ b/docs/execution_providers/OpenVINO-ExecutionProvider.md
@@ -20,14 +20,15 @@ session.set_providers(['OpenVINOExecutionProvider'], [{Key1 : Value1, Key2 : Val
 *Note that this causes the InferenceSession to be re-initialized, which may cause model recompilation and hardware re-initialization*
 
 ### C/C++ API
-All the options (key-value pairs) need to be concantenated into a string as shown below and passed to OrtSessionOptionsAppendExecutionProviderEx_OpenVINO() API as shown below:-
+All the options are populated in the OrtOpenVINOProviderOptions struct and passed to the SessionOptionsAppendExecutionProvider_OpenVINO() API, as shown below for the CPU device type:
 ```
-std::string settings_str;
-settings_str.append("Key1|Value1\n");
-settings_str.append("Key2|Value2\n");
-settings_str.append("Key3|Value3\n");
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProviderEx_OpenVINO(sf, settings_str.c_str()));
+OrtOpenVINOProviderOptions options;
+options.device_type = "CPU_FP32";
+options.enable_vpu_fast_compile = 0;
+options.device_id = "";
+options.num_of_threads = 8;
+SessionOptionsAppendExecutionProvider_OpenVINO(session_options, &options);
 ```
 
 ### Available configuration options
@@ -67,7 +68,7 @@ SessionOptions::SetGraphOptimizationLevel(ORT_DISABLE_ALL);
 ```
 
 ### Deprecated: Dynamic device type selection
-**Note: This API has been deprecated. Please use the Key-Value mechanism mentioned above to set the 'device-type' option.**
+**Note: This API has been deprecated. Please use the mechanism described above to set the 'device_type' option.**
 
 When ONNX Runtime is built with OpenVINO Execution Provider, a target hardware option needs to be provided. This build time option becomes the default target harware the EP schedules inference on. However, this target may be overriden at runtime to schedule inference on a different hardware as shown below.
 Note. This dynamic hardware selection is optional. The EP falls back to the build-time default selection if no dynamic hardware option value is specified.
@@ -82,22 +83,12 @@ onnxruntime.capi._pybind_state.set_openvino_device("")
 ```
 
 ### C/C++ API
-Append the settings string "device_type|<device_type>\n" to the EP settings string. Example shown below for the CPU_FP32 option:
+Append the device type string "<device_type>" to the EP settings string, as shown below for the CPU_FP32 option:
 ```
 std::string settings_str;
 ...
-settings_str.append("device_type|CPU_FP32\n");
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProviderEx_OpenVINO(sf, settings_str.c_str()));
-```
-
-
-### C/C++ API
-Append the settings string "device_id|<device_id>\n" to the EP settings string, where <device_id> is the unique identifier of the hardware device.
-```
-std::string settings_str;
-...
-settings_str.append("device_id|<device_id>\n");
-Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProviderEx_OpenVINO(sf, settings_str.c_str()));
+settings_str.append("CPU_FP32");
+Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_OpenVINO(sf, settings_str.c_str()));
 ```
 
 ## ONNX Layers supported using OpenVINO
@@ -280,4 +271,4 @@ Improved throughput that multiple devices can deliver (compared to single-device
 More consistent performance, since the devices can now share the inference burden (so that if one device is becoming too busy, another device can take more of the load)
 
 For more information on Multi-Device plugin of OpenVINO, please refer to the following
-[documentation](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html#introducing_multi_device_execution).
\ No newline at end of file
+[documentation](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_MULTI.html#introducing_multi_device_execution).