ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
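
To make this concrete, the sketch below runs a model with the ONNX Runtime Python API. It is a minimal example under stated assumptions: the model path, input shape, and execution-provider list are placeholders, not values from this README.

```python
# Minimal ONNX Runtime inference sketch (assumed model path and shape).
import numpy as np
import onnxruntime as ort

# Create a session; ORT tries each execution provider in order and
# falls back to CPU if an accelerator is unavailable.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path to an exported ONNX model
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Read the declared input name so the feed dict matches the model.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape

# run(None, ...) returns every model output as a list of numpy arrays.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```

Models from the frameworks above are first exported or converted to the ONNX format (for example with torch.onnx.export or skl2onnx) and then served through this same session API.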

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
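
As an illustration of that one-line change, the sketch below wraps a toy PyTorch model with ORTModule from the torch-ort package; the model, data, and training loop are placeholders rather than code from this repository.

```python
# Assumes the torch-ort package and a matching onnxruntime-training
# build are installed; the model and loop here are toy placeholders.
import torch
from torch_ort import ORTModule

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# The one-line addition: forward and backward passes now execute
# through ONNX Runtime's optimized training backend.
model = ORTModule(model)

# The rest of the script is unchanged PyTorch.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```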

Get Started & Resources

General information, usage documentation, and tutorials are available at https://onnxruntime.ai.

Builtin Pipeline Status

[Build status badges: inference and training pipelines for Windows, Linux, Mac, Android, iOS, Web, and other platforms.]

Third-party Pipeline Status

[Build status badge: third-party Linux pipeline.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.