ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. Learn more →
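
As a minimal sketch of what inference looks like with the Python API (the model path, input name, and input shape below are placeholders for illustration):

```python
import numpy as np
import onnxruntime as ort

# Load an ONNX model previously exported from PyTorch, TensorFlow/Keras,
# scikit-learn, etc. "model.onnx" and the input name "input" are placeholders.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Prepare an input matching the model's expected shape and dtype.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None as the first argument returns all model outputs.
outputs = session.run(None, {"input": x})
print(outputs[0].shape)
```

Swapping `CPUExecutionProvider` for an accelerator-specific provider (for example a CUDA or DirectML provider, if the corresponding package is installed) is how ONNX Runtime takes advantage of available hardware.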

ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
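
The one-line addition referred to above is wrapping an existing `torch.nn.Module` with `ORTModule`. A rough sketch, assuming the onnxruntime-training package is installed (the tiny stand-in model and random data are placeholders; the exact import path may vary with the packaging you use):

```python
import torch
from onnxruntime.training import ORTModule  # from the onnxruntime-training package

# Any existing torch.nn.Module; a small stand-in model is used here for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
model = ORTModule(model)  # the one-line addition; the rest of the script is unchanged

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for _ in range(10):  # placeholder training loop with random data
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```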

Get Started & Resources

Builtin Pipeline Status

[Build status badge table: Inference and Training pipelines for Windows, Linux, Mac, Android, iOS, Web, and Other targets.]

Third-party Pipeline Status

[Build status badge table: third-party Linux pipelines for Inference and Training.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.