Docker containers for ONNX Runtime
Build from Source
Ubuntu 16.04, CPU, Python Bindings
- Build the docker image from the Dockerfile in this repository.
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-source -f Dockerfile.source .
- Run the Docker image
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-source
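Once inside the container, a quick sanity check (a minimal sketch, assuming the image's default Python 3 environment and that the Python bindings were built, as this Dockerfile does) is to import the bindings and print the version:
# Run inside the container started above
python3 -c "import onnxruntime as ort; print(ort.__version__)"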
CUDA
Ubuntu 16.04, CUDA 10.0, cuDNN 7
- Build the docker image from the Dockerfile in this repository.
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-cuda -f Dockerfile.cuda .
- Run the Docker image
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-cuda
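Note that GPU access from inside the container normally requires the NVIDIA container runtime on the host; with Docker 19.03 or newer this is typically requested with the --gpus flag, for example:
docker run -it --gpus all onnxruntime-cuda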
nGraph (Public Preview)
Ubuntu 16.04, Python Bindings
- Build the docker image from the Dockerfile in this repository.
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-ngraph -f Dockerfile.ngraph .
- Run the Docker image
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-ngraph
TensorRT
Ubuntu 16.04, TensorRT 5.0.2
- Build the docker image from the Dockerfile in this repository.
# If you have a Linux machine, preface this command with "sudo"
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
- Run the Docker image
# If you have a Linux machine, preface this command with "sudo"
docker run -it onnxruntime-trt
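As with the CUDA image, exposing the host GPU to the container typically requires the NVIDIA container runtime, e.g. on Docker 19.03 or newer:
docker run -it --gpus all onnxruntime-trt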
OpenVINO (Public Preview)
Ubuntu 16.04, Python Bindings
- Build the onnxruntime image for all of the supported accelerators as described below. Retrieve your docker image in one of the following ways.
  - To build the docker image, download the OpenVINO online installer version 2019 R1.1 from here, copy the openvino tar file into the same directory, and build the image. The online installer is only 16MB, and the components needed for the accelerators are listed in the dockerfile. Providing the DEVICE argument enables onnxruntime for that particular device. You can also provide the ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH arguments to build a particular repo and branch; the defaults are http://github.com/microsoft/onnxruntime and the master branch (see the example after the table below).
    docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
  - Pull the official image from DockerHub.
- DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. Below are the options for different Intel target devices.

  | Device Option | Target Device |
  |---|---|
  | CPU_FP32 | Intel CPUs |
  | GPU_FP32 | Intel Integrated Graphics |
  | GPU_FP16 | Intel Integrated Graphics |
  | MYRIAD_FP16 | Intel Movidius™ USB sticks |
  | VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs |
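For example, a build that targets Intel CPUs and pins the repository and branch explicitly (both arguments are optional and shown here with their default values) could look like:
docker build -t onnxruntime --build-arg DEVICE=CPU_FP32 --build-arg ONNXRUNTIME_REPO=http://github.com/microsoft/onnxruntime --build-arg ONNXRUNTIME_BRANCH=master .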
CPU
- Retrieve your docker image in one of the following ways.
  - Build the docker image from the Dockerfile in this repository.
    docker build -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host .
  - Pull the official image from DockerHub.
    # Will be available with next release
- Run the docker image
  docker run -it onnxruntime-cpu
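Inside the running container you can confirm that the OpenVINO Execution Provider is available (a minimal check, assuming the Python bindings are installed in the image):
# Expect OpenVINOExecutionProvider to appear in the printed list
python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"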
GPU
- Retrieve your docker image in one of the following ways.
  - Build the docker image from the Dockerfile in this repository.
    docker build -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host .
  - Pull the official image from DockerHub.
    # Will be available with next release
- Run the docker image
  docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
Myriad VPU Accelerator
- Retrieve your docker image in one of the following ways.
  - Build the docker image from the Dockerfile in this repository.
    docker build -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host .
  - Pull the official image from DockerHub.
    # Will be available with next release
- Install the Myriad rules drivers on the host machine according to the reference here.
- Run the docker image, mounting the device drivers.
  docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
VAD-M Accelerator Version
- Retrieve your docker image in one of the following ways.
  - Build the docker image from the Dockerfile in this repository.
    docker build -t onnxruntime-vadr --build-arg DEVICE=VAD-M_FP16 --network host .
  - Pull the official image from DockerHub.
    # Will be available with next release
- Install the HDDL drivers on the host machine according to the reference here.
- Run the docker image, mounting the device drivers.
  docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadr:latest
ONNX Runtime Server (Public Preview)
Ubuntu 16.04
- Build the docker image from the Dockerfile in this repository
docker build -t {docker_image_name} -f Dockerfile.server .
- Run the ONNX Runtime server with the image created in step 1 (a filled-in example follows at the end of this section)
docker run -v {localModelAbsoluteFolder}:{dockerModelAbsoluteFolder} -e MODEL_ABSOLUTE_PATH={dockerModelAbsolutePath} -p {your_local_port}:8001 {docker_image_name}
- Send HTTP requests to the container running ONNX Runtime Server
Send HTTP requests to the docker container through the bound local port. Here is the full usage document.
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
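For example, with hypothetical values (image tagged onnxruntime-server, models stored under /home/me/models on the host and mounted at /models in the container, and local port 9001), steps 2 and 3 could look like:
docker run -v /home/me/models:/models -e MODEL_ABSOLUTE_PATH=/models/mymodel.onnx -p 9001:8001 onnxruntime-server
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:9001/v1/models/mymodel/versions/3:predict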