# Dockerfiles

**Execution Providers**
- CPU: [Dockerfile](Dockerfile.source), [Instructions](#cpu)
- CUDA/cuDNN: [Dockerfile](Dockerfile.cuda), [Instructions](#cuda)
- MIGraphX: [Dockerfile](Dockerfile.migraphx), [Instructions](#migraphx)
- ROCm: [Dockerfile](Dockerfile.rocm), [Instructions](#rocm)
- OpenVINO: [Dockerfile](Dockerfile.openvino), [Instructions](#openvino)
- TensorRT: [Dockerfile](Dockerfile.tensorrt), [Instructions](#tensorrt)
- VitisAI: [Dockerfile](Dockerfile.vitisai)
- NVIDIA Jetson TX1/TX2/Nano/Xavier: [Dockerfile](Dockerfile.jetson), [Instructions](#nvidia-jetson-tx1tx2nanoxavier)

**Other**
- ORT Training (torch-ort): [Dockerfiles](https://github.com/pytorch/ort/tree/main/docker)
- ONNX-Ecosystem (CPU + Converters): [Dockerfile](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/Dockerfile), [Instructions](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem)
# Instructions

## CPU
**Mariner 2.0, CPU, Python Bindings**

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-source -f Dockerfile.source ..
```
2. Run the Docker image.
```bash
docker run -it onnxruntime-source
```
The Dockerfile supports both x86_64 and ARM64 (aarch64). You may use Docker's `--platform` parameter to explicitly specify which CPU architecture you want to build for. For example:

```bash
docker build --platform linux/arm64/v8 -t onnxruntime-source -f Dockerfile.source ..
```

However, 32-bit ARM cannot be built this way, since a 32-bit compiler/linker might not have enough memory to generate the binaries.
## CUDA
**Ubuntu 22.04, CUDA 12.1, cuDNN 8**

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-cuda -f Dockerfile.cuda ..
```
2. Run the Docker image.
```bash
docker run --gpus all -it onnxruntime-cuda
```
or
```bash
nvidia-docker run -it onnxruntime-cuda
```
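Once the container runs, it can be useful to confirm that the CUDA execution provider is actually visible inside it. The following is a sketch, not part of the official instructions: it assumes the `onnxruntime-cuda` tag from step 1 and that the image contains the Python bindings, and it skips cleanly when Docker or the image is unavailable.

```shell
#!/bin/sh
# Sanity check: ask onnxruntime inside the container which execution
# providers it can see. Skips when docker or the image is not available.
if command -v docker >/dev/null 2>&1 && docker image inspect onnxruntime-cuda >/dev/null 2>&1; then
    PROVIDERS=$(docker run --rm --gpus all onnxruntime-cuda \
        python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())")
    case "$PROVIDERS" in
        *CUDAExecutionProvider*) CHECK=ok ;;
        *) CHECK=missing ;;
    esac
else
    CHECK=skipped
fi
echo "CUDA EP check: $CHECK"
```

If the check prints `missing`, the GPU runtime is usually not wired into Docker; see the `--gpus all` / `nvidia-docker` variants above.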
## TensorRT
**Ubuntu 20.04, CUDA 11.8, TensorRT 8.5.1**

1. Update submodules.
```bash
git submodule update --init
```
2. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
```
3. Run the Docker image.
```bash
docker run --gpus all -it onnxruntime-trt
```
or
```bash
nvidia-docker run -it onnxruntime-trt
```
## OpenVINO
*Public Preview*

**Ubuntu 20.04, Python & C# Bindings**
**RHEL 8.4, Python Binding**

### **1. Using pre-built container images for Python API**

The unified container image from [Dockerhub](https://hub.docker.com/repository/docker/openvino/onnxruntime_ep_ubuntu20) can be used to run an application on any of the target accelerators. To select a target accelerator, the application should explicitly specify the choice using the `device_type` configuration option of the OpenVINO Execution Provider. Refer to the [OpenVINO EP runtime configuration documentation](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options) for details on specifying this option in application code.

If the `device_type` runtime config option is not explicitly specified, CPU is chosen as the hardware target.
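As a concrete illustration, an application selects the accelerator by passing `device_type` as a provider option when creating the session. This is a minimal sketch: `model.onnx` is a placeholder path, and the session is created lazily so the option structure can be read without an OpenVINO-enabled build at hand.

```python
# Provider list with an explicit OpenVINO target; "device_type" is the
# documented OpenVINO EP configuration option. CPU EP is kept as a fallback.
providers = [
    ("OpenVINOExecutionProvider", {"device_type": "GPU_FP16"}),
    "CPUExecutionProvider",
]

def make_session(model_path="model.onnx"):
    # Imported lazily; requires an onnxruntime build with the OpenVINO EP.
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=providers)
```

Omitting the `device_type` entry falls back to the CPU default described above.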
### **2. Building from Dockerfile**

1. Build the onnxruntime image for one of the supported accelerators below.

Retrieve your Docker image in one of the following ways.

- Choose Dockerfile.openvino (Python API) or Dockerfile.openvino-csharp (C# API) as `<Dockerfile>` to build the latest OpenVINO-based Docker image for Ubuntu 20.04, or Dockerfile.openvino-rhel (Python API) for RHEL 8.4. Providing the Docker build argument DEVICE enables the onnxruntime build for that particular device. You can also provide the arguments ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH to test a particular repository and branch. The default repository is http://github.com/microsoft/onnxruntime and the default branch is main.
```bash
docker build --rm -t onnxruntime --build-arg DEVICE=$DEVICE -f <Dockerfile> .
```
- Pull the official image from DockerHub.
2. DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. Below are the options for different Intel target devices.

| Device Option | Target Device |
| --------- | -------- |
| `CPU_FP32` | Intel<sup>®</sup> CPUs |
| `CPU_FP16` | Intel<sup>®</sup> CPUs |
| `GPU_FP32` | Intel<sup>®</sup> Integrated Graphics |
| `GPU_FP16` | Intel<sup>®</sup> Integrated Graphics |
| `MYRIAD_FP16` | Intel<sup>®</sup> Movidius<sup>TM</sup> USB sticks |
| `VAD-M_FP16` | Intel<sup>®</sup> Vision Accelerator Design based on Movidius<sup>TM</sup> MyriadX VPUs |
| `HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>...` | All Intel<sup>®</sup> silicon mentioned above |
| `MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>...` | All Intel<sup>®</sup> silicon mentioned above |
| `AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>...` | All Intel<sup>®</sup> silicon mentioned above |

Specifying the hardware target for a HETERO, MULTI, or AUTO build:

`HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...`
`MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...`
`AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...`

`<DEVICE_TYPE>` can be any device from this list: ['CPU', 'GPU', 'MYRIAD', 'HDDL'].

A minimum of two DEVICE_TYPEs must be specified for a valid HETERO, MULTI, or AUTO build.

Examples:
HETERO:MYRIAD,CPU  HETERO:HDDL,GPU,CPU  MULTI:MYRIAD,GPU,CPU  AUTO:GPU,CPU

*This is the hardware accelerator target that is enabled by **default** in the container image. After building the container image for one default target, the application may explicitly choose a different target at run time with the same container by using the [dynamic device selection API](https://github.com/microsoft/onnxruntime/blob/main/docs/execution_providers/OpenVINO-ExecutionProvider.md#dynamic-device-selection).*
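The combined-device rules above (a HETERO/MULTI/AUTO prefix, device types drawn from CPU/GPU/MYRIAD/HDDL, and at least two of them) can be captured in a small validator. This is a hypothetical helper for illustration only, not part of onnxruntime:

```python
VALID_PREFIXES = {"HETERO", "MULTI", "AUTO"}
VALID_DEVICES = {"CPU", "GPU", "MYRIAD", "HDDL"}

def is_valid_combined_device(option: str) -> bool:
    """Check a HETERO/MULTI/AUTO DEVICE build string against the rules above."""
    prefix, sep, rest = option.partition(":")
    if not sep or prefix not in VALID_PREFIXES:
        return False
    devices = rest.split(",")
    # A minimum of two device types, each drawn from the supported list.
    return len(devices) >= 2 and all(d in VALID_DEVICES for d in devices)
```

For example, `is_valid_combined_device("HETERO:MYRIAD,CPU")` holds, while `"MULTI:CPU"` fails the two-device minimum.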
### OpenVINO on CPU

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build --rm -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 -f <Dockerfile> .
```
2. Run the Docker image.
```bash
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb onnxruntime-cpu:latest
```
### OpenVINO on GPU

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build --rm -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 -f <Dockerfile> .
```
2. Run the Docker image.
```bash
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --device /dev/dri:/dev/dri onnxruntime-gpu:latest
```
If your host system is Ubuntu 20.04, use the command below instead. Alternative steps can be found [here](https://github.com/openvinotoolkit/docker_ci/blob/master/configure_gpu_ubuntu20.md).
```bash
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --device /dev/dri:/dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) onnxruntime-gpu:latest
```
### OpenVINO on Myriad VPU Accelerator

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build --rm -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 -f <Dockerfile> .
```
2. Install the Myriad udev rules and drivers on the host machine, following the reference [here](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps).
3. Run the Docker image, mounting the device drivers.
```bash
docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb onnxruntime-myriad:latest
```
2019-10-15 22:58:02 +00:00
### OpenVINO on VAD-M Accelerator Version
2019-07-17 21:52:59 +00:00
2023-01-12 00:31:26 +00:00
1. Download OpenVINO **Full package** for latest version for Linux on host machine from [this link ](https://software.intel.com/en-us/openvino-toolkit/choose-download ) and install it with the help of instructions from [this link ](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html )
2020-06-02 09:42:58 +00:00
2. Install the drivers on the host machine according to the reference in [here ](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux_ivad_vpu.html )
3. Build the docker image from the DockerFile in this repository.
2019-09-13 21:16:47 +00:00
```
2021-04-01 18:28:54 +00:00
docker build --rm -t onnxruntime-vadm --build-arg DEVICE=VAD-M_FP16 -f < Dockerfile > .
2019-07-17 21:52:59 +00:00
```
2021-08-02 22:13:46 +00:00
4. Run hddldaemon on the host in a separate terminal session using the following steps:
2021-04-01 18:28:54 +00:00
- Initialize the OpenVINO environment.
```
2023-01-12 00:31:26 +00:00
source < openvino_install_directory > /setupvars.sh
2021-04-01 18:28:54 +00:00
```
- Edit the hddl_service.config file from $HDDL_INSTALL_DIR/config/hddl_service.config and change the field “bypass_device_number” to 8.
- Restart the hddl daemon for the changes to take effect.
2020-06-02 09:42:58 +00:00
```
$HDDL_INSTALL_DIR/bin/hddldaemon
```
2021-04-01 18:28:54 +00:00
- Note that if OpenVINO was installed with root permissions, this file has to be changed with the same permissions.
2020-06-02 09:42:58 +00:00
5. Run the docker image by mounting the device drivers
2019-07-17 21:52:59 +00:00
```
2021-08-02 22:13:46 +00:00
docker run -itu root:root --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadm:latest
2019-07-17 21:52:59 +00:00
```
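The `bypass_device_number` edit in step 4 can also be scripted. The sketch below operates on a scratch copy rather than the real config (which lives at $HDDL_INSTALL_DIR/config/hddl_service.config and usually requires the install's permissions); the exact field layout may differ between OpenVINO releases, so treat the `sed` pattern as an assumption.

```shell
#!/bin/sh
# Demonstrate the edit on a scratch copy of the config field.
CFG=$(mktemp)
printf '"bypass_device_number": 0,\n' > "$CFG"
# Rewrite the field's value to 8, as step 4 requires.
sed -i 's/"bypass_device_number": *[0-9]*/"bypass_device_number": 8/' "$CFG"
RESULT=$(cat "$CFG")
rm -f "$CFG"
echo "$RESULT"
```

Point the same `sed` at the real file (with matching permissions) once you have verified the pattern against your installed config.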
### OpenVINO on HETERO, MULTI, or AUTO Build
1. Build the Docker image from the Dockerfile in this repository.
for HETERO:
```bash
docker build --rm -t onnxruntime-HETERO --build-arg DEVICE=HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
```
for MULTI:
```bash
docker build --rm -t onnxruntime-MULTI --build-arg DEVICE=MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
```
for AUTO:
```bash
docker build --rm -t onnxruntime-AUTO --build-arg DEVICE=AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
```
2. Install the required rules, drivers, and other packages for each DEVICE_TYPE included in the HETERO, MULTI, or AUTO build, following the steps above.
3. Run the Docker image as described in the steps above.
## ARM 32/64

The build instructions are similar to those for x86 CPUs. However, if you want to build on an x86 machine, you need to install the qemu-user-static system package (outside of the Docker instances) first. Then:

1. Update submodules.
```bash
git submodule update --init
```
2. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-source -f Dockerfile.arm64 ..
```
3. Run the Docker image.
```bash
docker run -it onnxruntime-source
```

For ARM32, use Dockerfile.arm32v7 instead of Dockerfile.arm64.
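A quick way to tell whether an x86 host is ready for the cross-architecture build mentioned above is to check for qemu-user-static and its binfmt registration. This sketch only inspects the host and prints what it finds; the handler path may vary by distribution:

```shell
#!/bin/sh
# Check whether ARM64 binaries can be emulated on this (x86) host.
if command -v qemu-aarch64-static >/dev/null 2>&1 \
   || [ -e /proc/sys/fs/binfmt_misc/qemu-aarch64 ]; then
    QEMU_STATUS=present
else
    # e.g. on Debian/Ubuntu: sudo apt-get install qemu-user-static
    QEMU_STATUS=missing
fi
echo "qemu-user-static: $QEMU_STATUS"
```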
## NVIDIA Jetson TX1/TX2/Nano/Xavier

These instructions are for [JetPack SDK 4.4](https://developer.nvidia.com/embedded/jetpack).
Dockerfile.jetson uses [NVIDIA L4T 32.4.3](https://developer.nvidia.com/embedded/linux-tegra) as its base image.
Versions different from these may require modifications to these instructions.
The instructions assume you are on the Jetson host, in the root of a clone of the onnxruntime git project (`https://github.com/microsoft/onnxruntime`).
A two-step installation is required:

1. Build the Python wheel for ONNX Runtime on the host Jetson system. Pre-built Python wheels are also available at the [Nvidia Jetson Zoo](https://elinux.org/Jetson_Zoo#ONNX_Runtime).
2. Build the Docker image using the ONNX Runtime wheel from step 1. You can also install the wheel on the host directly.

Here are the build commands for each step:

1.1 Install the ONNX Runtime build dependencies on the JetPack 4.4 host:
```bash
sudo apt install -y --no-install-recommends \
  build-essential software-properties-common cmake libopenblas-dev \
  libpython3.6-dev python3-pip python3-dev
```
1.2 Build the ONNX Runtime Python wheel:
```bash
./build.sh --update --config Release --build --build_wheel \
  --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu
```
Note: You may add the --use_tensorrt and --tensorrt_home options if you wish to use NVIDIA TensorRT (support is experimental), as well as any other options supported by the [build.sh script](build.sh).

2. After the Python wheel is successfully built, use the 'find' command to have Docker install the wheel inside the new image:
```bash
find . -name '*.whl' -print -exec sudo -H DOCKER_BUILDKIT=1 nvidia-docker build --build-arg WHEEL_FILE={} -f ./dockerfiles/Dockerfile.jetson . \;
```
Note: The resulting Docker image will have ONNX Runtime installed in /usr, and the ONNX Runtime wheel copied to the /onnxruntime directory.
Nothing else from the ONNX Runtime source tree will be copied/installed to the image.

Note: When running the container you built, either use the 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure the NVIDIA runtime is used and the appropriate files are mounted from the host. Otherwise, CUDA libraries won't be found. You can also [set the NVIDIA runtime as default in Docker](https://github.com/dusty-nv/jetson-containers#docker-default-runtime).
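For example, the run command can be chosen based on whether the Docker daemon reports an nvidia runtime. The image tag `onnxruntime-jetson` below is a placeholder for whatever tag you gave your image, and the detection via `docker info` is a heuristic; the command is only assembled and printed here, not executed:

```shell
#!/bin/sh
# Pick a run command that will load the NVIDIA runtime (placeholder tag).
IMAGE=onnxruntime-jetson
if command -v docker >/dev/null 2>&1 && docker info 2>/dev/null | grep -q nvidia; then
    RUN_CMD="docker run --runtime nvidia -it $IMAGE"
else
    RUN_CMD="nvidia-docker run -it $IMAGE"
fi
echo "$RUN_CMD"
```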
## MIGraphX
**Ubuntu 20.04, ROCm 6.0, MIGraphX**

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-migraphx -f Dockerfile.migraphx .
```
2. Run the Docker image.
```bash
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video onnxruntime-migraphx
```
## ROCm
**Ubuntu 20.04, ROCm 6.0**

1. Build the Docker image from the Dockerfile in this repository.
```bash
docker build -t onnxruntime-rocm -f Dockerfile.rocm .
```
2. Run the Docker image.
```bash
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video onnxruntime-rocm
```