ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
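As a minimal sketch of the Python inference API (the model file `model.onnx`, its input shape, and the dummy data below are illustrative assumptions, not part of this README):

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model; "model.onnx" is a hypothetical file path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the input name from the model rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Run inference with dummy data; shape and dtype must match the model's input.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```

The same session API is available in C, C++, C#, Java, JavaScript, and other bindings; the `providers` argument selects which hardware accelerator (execution provider) to run on.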

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
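A minimal sketch of that one-line addition, assuming the torch-ort package (which provides ORTModule) is installed; the toy model and data here are illustrative:

```python
import torch
from torch_ort import ORTModule  # assumes the torch-ort package is installed

# A toy model for illustration; any torch.nn.Module works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 1),
)
model = ORTModule(model)  # the one-line addition: run training through ONNX Runtime

# The rest of the training loop is unchanged PyTorch.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```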

Get Started

General Information: onnxruntime.ai

Usage documentation and tutorials: onnxruntime.ai/docs

Companion sample repositories:

ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples
ONNX Runtime Training: microsoft/onnxruntime-training-examples

Build Pipeline Status

[Build status badges by platform (Windows, Linux, Mac, Android, iOS, WebAssembly) and target (CPU, GPU, EPs).]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.