ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

ONNX Runtime

Introduction

ONNX Runtime is an open-source scoring engine for Open Neural Network Exchange (ONNX) models.

ONNX is an open format for machine learning (ML) models that is supported by various ML and DNN frameworks and tools. This format makes it easier to interoperate between frameworks and to maximize the reach of your hardware optimization investments. Learn more about ONNX at https://onnx.ai or view the GitHub repo.

Why use ONNX Runtime

Run any ONNX model

ONNX Runtime provides comprehensive support for the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See ONNX version release details here.

To support popular and leading AI models, the runtime stays up to date with evolving ONNX operators and functionality.
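
As a quick way to see which ONNX operator-set (opset) version a model targets, here is a minimal sketch; it assumes the onnx Python package is installed and uses an illustrative file name, model.onnx:

    import onnx

    model = onnx.load("model.onnx")
    # opset_import records the operator-set versions the model was exported with.
    for opset in model.opset_import:
        print(opset.domain or "ai.onnx", opset.version)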

Cross Platform

ONNX Runtime offers:

  • APIs for Python, C#, and C (experimental)
  • Available for Linux, Windows, and Mac

See API documentation and package installation instructions below.

High Performance

You can use ONNX Runtime with both CPU and GPU hardware. You can also plug in additional execution providers to ONNX Runtime. With many graph optimizations and various accelerators, ONNX Runtime can often provide lower latency and higher efficiency compared to other runtimes. This provides smoother end-to-end customer experiences and lower costs from improved machine utilization.

ONNX Runtime currently supports CUDA and MKL-DNN (with the option to build with MKL) for computation acceleration, with more coming soon. To add an execution provider, please refer to this page.
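
As a quick sanity check, assuming the Python package is installed, onnxruntime.get_device() reports which hardware target the installed build supports:

    import onnxruntime as rt

    # Reports the device target of the installed build, e.g. "CPU" or "GPU".
    print(rt.get_device())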

Getting Started

If you need a model:

  • Check out the ONNX Model Zoo for ready-to-use pre-trained models.
  • To get an ONNX model by exporting from various frameworks, see ONNX Tutorials (a minimal PyTorch export sketch is shown after this list).
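
The sketch below shows one way to export a model to ONNX from PyTorch; it assumes PyTorch and torchvision are installed, and the model choice and output file name are purely illustrative:

    import torch
    import torchvision

    # Export a pretrained ResNet-18 by tracing it with a dummy input.
    model = torchvision.models.resnet18(pretrained=True)
    model.eval()
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "resnet18.onnx")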

If you already have an ONNX model, just install the runtime for your machine to try it out. One easy way to deploy the model to the cloud is by using Azure Machine Learning; see detailed instructions here.
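
Once the runtime is installed, scoring a model from Python looks roughly like the sketch below; the model file, input shape, and random data are illustrative placeholders:

    import numpy as np
    import onnxruntime as rt

    # Load the model and run a single inference on randomly generated input.
    sess = rt.InferenceSession("model.onnx")
    input_name = sess.get_inputs()[0].name
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = sess.run(None, {input_name: input_data})
    print(outputs[0].shape)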

Installation

APIs and Official Builds

  • Python: CPU packages for Windows, Linux, and Mac; GPU packages for Windows and Linux
  • C#: CPU package for Windows (Linux and Mac coming soon); GPU package coming soon
  • C (experimental): CPU and GPU packages coming soon
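
For the Python API, installation is typically a single pip command; the package names below are the ones published on PyPI (check the documentation for the exact command for your platform and Python version):

    pip install onnxruntime        # CPU build
    pip install onnxruntime-gpu    # GPU (CUDA) build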



ONNX Runtime also provides a non-ABI C++ API.

Build Details

For details on the build configurations and information on how to create a build, see Build ONNX Runtime.

Versioning

See Versioning for details on API and ABI versioning and ONNX compatibility.

Design and Key Features

For an overview of the high level architecture and key decisions in the technical design of ONNX Runtime, see Engineering Design.

ONNX Runtime is built with an extensible design that allows it to support a wide array of models with high performance.

Contribute

We welcome your contributions! Please see the contribution guidelines.

Feedback

For any feedback or to report a bug, please file a GitHub Issue.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

MIT License