ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

ONNX Runtime is an open-source scoring engine for Open Neural Network Exchange (ONNX) models.

ONNX is an open format for machine learning (ML) models that is supported by various ML and DNN frameworks and tools. This format makes it easier to interoperate between frameworks and to maximize the reach of your hardware optimization investments. Learn more about ONNX at https://onnx.ai or view the GitHub repo.

Why use ONNX Runtime

ONNX Runtime is an open architecture that is continually evolving to adapt to and address the newest developments and challenges in AI and Deep Learning. We will keep ONNX Runtime up to date with the ONNX standard, supporting all ONNX releases with future compatibility while maintaining backwards compatibility with prior releases.

ONNX Runtime continuously strives to provide top performance for a broad and growing number of usage scenarios in Machine Learning. Our investments focus on three core areas:

  1. Run any ONNX model
  2. High performance
  3. Cross platform

Run any ONNX model

Alignment with ONNX Releases

ONNX Runtime provides comprehensive support of the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See ONNX version release details here.

As of March 2019, ONNX Runtime supports ONNX 1.4.

Traditional ML support

ONNX Runtime fully supports the ONNX-ML profile of the ONNX spec for traditional ML scenarios.

High Performance

You can use ONNX Runtime with both CPU and GPU hardware. You can also plug in additional execution providers to ONNX Runtime. With many graph optimizations and various accelerators, ONNX Runtime can often provide lower latency and higher efficiency compared to other runtimes. This provides smoother end-to-end customer experiences and lower costs from improved machine utilization.

Currently ONNX Runtime supports CUDA, MLAS (Microsoft Linear Algebra Subprograms), MKL-DNN, and MKL-ML for computation acceleration. See more details on available build options here or refer to this page to add a new execution provider.

We are continuously working to integrate new execution providers to provide improvements in latency and efficiency. We have ongoing collaborations to integrate the following with ONNX Runtime:

  • Intel MKL-DNN and nGraph
  • NVIDIA TensorRT

Cross Platform

ONNX Runtime offers:

  • APIs for Python, C#, and C
  • Available for Linux, Windows, and Mac

See API documentation and package installation instructions below.

Looking ahead: To broaden the reach of the runtime, we will continue investments to make ONNX Runtime available and compatible with more platforms. If you have specific scenarios that are not currently supported, please share your suggestions via GitHub Issues.

Getting Started

If you need a model:

  • Check out the ONNX Model Zoo for ready-to-use pre-trained models.
  • To get an ONNX model by exporting from various frameworks, see ONNX Tutorials.

If you already have an ONNX model, just install the runtime for your machine to try it out. One easy way to deploy the model on the cloud is by using Azure Machine Learning. See detailed instructions and sample notebooks.

Installation

APIs and Official Builds

Python**
  • CPU package (PyPI): Windows x64, Linux x64, Mac OS X x64
  • GPU package* (PyPI): Windows x64, Linux x64

C#
  • CPU package (NuGet): Windows x64, Linux x64, Mac OS X x64
  • GPU package* (NuGet): Windows x64, Linux x64

C
  • CPU package (NuGet): Windows x64, Linux x64, Mac OS X x64
  • CPU files (.zip, .tgz): Windows x64 and x86, Linux x64 and x86, Mac OS X x64
  • GPU package* (NuGet): Windows x64, Linux x64
  • GPU files* (.zip, .tgz): Windows x64, Linux x64

C++
  • CPU and GPU: build from source

*Requires CUDA 9.1 and cuDNN 7.3
**Compatible with Python 3.5-3.7

System Requirements

  • ONNX Runtime binaries in CPU packages use OpenMP and depend on the library being available on the system at runtime.
    • On Windows, OpenMP support ships as part of the Visual C++ runtime. It is also available in the redistributable packages vc_redist.x64.exe and vc_redist.x86.exe.
    • On Linux, the system must have libgomp.so.1, which can be installed with apt-get install libgomp1.
  • The GPU builds require the CUDA 9.1 and cuDNN 7.3 runtime libraries to be installed on the system.

Build Details

For details on the build configurations and information on how to create a build, see Build ONNX Runtime.

Versioning

See more details on API and ABI Versioning and ONNX Compatibility in Versioning.

Design and Key Features

For an overview of the high level architecture and key decisions in the technical design of ONNX Runtime, see Engineering Design.

ONNX Runtime is built on an extensible design, making it versatile enough to support a wide array of models with high performance.

Contribute

We welcome your contributions! Please see the contribution guidelines.

Feedback

For any feedback or to report a bug, please file a GitHub Issue.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

MIT License