ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
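For illustration, here is a minimal Python sketch of running an exported model (assuming the onnxruntime package is installed, e.g. via pip install onnxruntime; the model path, input shape, and dtype are placeholders, not something this README prescribes):

    # Minimal inference sketch. "model.onnx" and the input shape are
    # placeholders; query the session for your model's real input metadata.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name               # first graph input
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # dummy image-shaped batch
    outputs = session.run(None, {input_name: x})             # None -> return all outputs
    print(outputs[0].shape)

Swapping in a different execution provider (for example CUDAExecutionProvider on a GPU build) is a change to the providers list only; the rest of the call pattern stays the same.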

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
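The one-line addition is wrapping an existing torch.nn.Module in ORTModule; a hedged sketch under that assumption (MyModel, dataloader, and the loss below are placeholders standing in for an existing script, not part of this README):

    # Sketch of adding ORTModule to an existing PyTorch training script.
    import torch
    import torch.nn.functional as F
    from onnxruntime.training.ortmodule import ORTModule

    model = MyModel().cuda()        # placeholder: any existing torch.nn.Module
    model = ORTModule(model)        # the one-line addition; ORT runs fwd/bwd
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    for batch, labels in dataloader:   # placeholder: existing data pipeline
        optimizer.zero_grad()
        loss = F.cross_entropy(model(batch), labels)
        loss.backward()                # gradients computed through ONNX Runtime
        optimizer.step()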

Get Started

General Information: onnxruntime.ai

Usage documentation and tutorials: onnxruntime.ai/docs

Companion sample repositories:
ONNX Runtime inferencing: microsoft/onnxruntime-inference-examples
ONNX Runtime training: microsoft/onnxruntime-training-examples

Build Pipeline Status

[Build status badges for the Windows, Linux, Mac, Android, iOS, and WebAssembly pipelines, grouped into CPU, GPU, and EPs columns]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
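Telemetry can also be switched off at runtime through the language APIs; a sketch using the Python binding (assuming the package exposes disable_telemetry_events, as current wheels do):

    # Opt out of telemetry for this process. Builds compiled without
    # telemetry support treat this call as a no-op.
    import onnxruntime as ort

    ort.disable_telemetry_events()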

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.