# Versioning

## API
ONNX Runtime follows Semantic Versioning 2.0 for its public API. Each release has the form MAJOR.MINOR.PATCH, adhering to the definitions from the linked semantic versioning doc.
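As a minimal sketch of what MAJOR.MINOR.PATCH ordering means in practice, a version string can be parsed into a comparable tuple (illustrative only; `parse_version` is a hypothetical helper, not part of the ONNX Runtime API, and pre-release/build metadata handling is omitted):

```python
def parse_version(version):
    """Parse a MAJOR.MINOR.PATCH string into a comparable (int, int, int) tuple."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

# Tuple comparison orders versions the way semantic versioning intends:
print(parse_version("1.3.1") > parse_version("1.3.0"))  # True: newer patch release
```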
### Current stable release version
The version number of the current stable release can be found here.
### Release cadence
## Compatibility

### Backwards compatibility
All versions of ONNX Runtime support ONNX opsets all the way back to (and including) opset version 7. In other words, if an ONNX Runtime release implements ONNX opset version 9, it can run all models that are stamped with ONNX opset versions in the range [7, 9].
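The rule above reduces to a simple range check. A sketch (illustrative only; `can_run_model` is a hypothetical helper, not an ONNX Runtime API):

```python
MIN_SUPPORTED_OPSET = 7  # oldest opset every ONNX Runtime release supports

def can_run_model(model_opset, runtime_opset):
    """Return True if a runtime implementing `runtime_opset` can run a model
    stamped with `model_opset`, per the backwards-compatibility rule."""
    return MIN_SUPPORTED_OPSET <= model_opset <= runtime_opset

print(can_run_model(8, 9))   # True: 8 falls inside [7, 9]
print(can_run_model(12, 9))  # False: model is newer than the runtime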
### Version matrix
The following table summarizes the relationship between the ONNX Runtime version and the ONNX opset version implemented in that release. Please note the compatibility notes above. For more details on ONNX release versions, see this page.
| ONNX Runtime release version | ONNX release version | ONNX opset version | ONNX ML opset version | Supported ONNX IR version | WinML compatibility |
|---|---|---|---|---|---|
| 1.3.1 | 1.7 down to 1.2 | 12 | 2 | 6 | -- |
| 1.3.0 | 1.7 down to 1.2 | 12 | 2 | 6 | -- |
| 1.2.0<br>1.1.2<br>1.1.1<br>1.1.0 | 1.6 down to 1.2 | 11 | 2 | 6 | -- |
| 1.0.0 | 1.6 down to 1.2 | 11 | 2 | 6 | -- |
| 0.5.0 | 1.5 down to 1.2 | 10 | 1 | 5 | -- |
| 0.4.0 | 1.5 down to 1.2 | 10 | 1 | 5 | -- |
| 0.3.1<br>0.3.0 | 1.4 down to 1.2 | 9 | 1 | 3 | -- |
| 0.2.1<br>0.2.0 | 1.3 down to 1.2 | 8 | 1 | 3 | 1903 (19H1)+ |
| 0.1.5<br>0.1.4 | 1.3 down to 1.2 | 8 | 1 | 3 | 1809 (RS5)+ |
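The matrix above can be encoded as a simple lookup when tooling needs to reason about compatibility (a sketch; the dictionary and helper names are assumptions, not an ONNX Runtime API):

```python
# ONNX Runtime release -> highest ONNX opset implemented, per the matrix above.
# Releases sharing a row in the table share the same opset.
ORT_MAX_OPSET = {
    "1.3.1": 12, "1.3.0": 12,
    "1.2.0": 11, "1.1.2": 11, "1.1.1": 11, "1.1.0": 11, "1.0.0": 11,
    "0.5.0": 10, "0.4.0": 10,
    "0.3.1": 9, "0.3.0": 9,
    "0.2.1": 8, "0.2.0": 8, "0.1.5": 8, "0.1.4": 8,
}

def max_opset(ort_version):
    """Highest ONNX opset a given ONNX Runtime release implements."""
    return ORT_MAX_OPSET[ort_version]

print(max_opset("1.3.1"))  # 12
```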
## Tool Compatibility
A variety of tools can be used to create ONNX models. Unless otherwise noted, please use the latest released version of each tool to convert/export the ONNX model. Most tools are backwards compatible and support multiple ONNX versions. Cross-reference this table with the version matrix above to evaluate ONNX Runtime compatibility.
| Tool | Recommended Version | Supported ONNX version(s) |
|---|---|---|
| PyTorch | Latest stable | 1.2-1.6 |
| ONNXMLTools<br>CoreML, LightGBM, XGBoost, LibSVM | Latest stable | 1.2-1.6 |
| ONNXMLTools SparkML | Latest stable | 1.4-1.5 |
| SKLearn-ONNX | Latest stable | 1.2-1.6 |
| Keras-ONNX | Latest stable | 1.2-1.6 |
| Tensorflow-ONNX | Latest stable | 1.2-1.6 |
| WinMLTools | Latest stable | 1.2-1.6 |
| AutoML | 1.0.39+ | 1.5 |
| | 1.0.33 | 1.4 |
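The suggested cross-reference between the two tables can be sketched programmatically. This assumes the ONNX-release-to-opset mapping implied by the version matrix (ONNX 1.2 → opset 7 through ONNX 1.7 → opset 12); `tool_output_runs_on` and the dictionary are hypothetical names, not a real API:

```python
# ONNX release -> opset it introduced (derived from the version matrix above).
ONNX_RELEASE_OPSET = {"1.2": 7, "1.3": 8, "1.4": 9, "1.5": 10, "1.6": 11, "1.7": 12}

def tool_output_runs_on(tool_max_onnx, runtime_max_opset):
    """True if a model exported at the tool's newest supported ONNX release
    falls within the opset range [7, runtime_max_opset] the runtime accepts."""
    opset = ONNX_RELEASE_OPSET[tool_max_onnx]
    return 7 <= opset <= runtime_max_opset

# e.g. the SparkML converter (ONNX up to 1.5 -> opset 10) against ORT 1.3.x (opset 12):
print(tool_output_runs_on("1.5", 12))  # True
```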