ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
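For example, a typical inference call through the Python API looks like the following minimal sketch; `model.onnx`, the chosen execution provider, and the input shape are placeholders for whatever model you export from your framework of choice:

```python
# Minimal inference sketch: "model.onnx" and the [1, 3, 224, 224] input shape are
# assumptions standing in for any exported model.
import numpy as np
import onnxruntime as ort

# Create a session; execution providers are tried in order, so you can request a
# hardware accelerator and fall back to CPU if it is unavailable.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Look up the model's declared input so the feed dictionary uses the right name.
input_meta = session.get_inputs()[0]

# Run the model on a dummy input and print the shape of the first output.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

Graph optimizations are applied when the session is created, so the same script runs unchanged whether the model came from PyTorch, TensorFlow/Keras, or a classical ML converter.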

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
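A minimal sketch of that one-line addition is shown below, assuming the onnxruntime-training (ORTModule) package is installed; the tiny model and synthetic data are illustrative stand-ins for an existing training script:

```python
# Sketch only: the model, optimizer, and data below are placeholders for an existing
# PyTorch training script; the import path assumes the onnxruntime-training package.
import torch
from onnxruntime.training import ORTModule

# Any existing torch.nn.Module; a small stand-in is used here for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)

# The one-line addition: wrap the model so forward and backward passes run through ONNX Runtime.
model = ORTModule(model)

# The rest of the training loop is unchanged.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):
    x = torch.randn(32, 128)
    loss = model(x).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```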

Get Started & Resources

Built-in Pipeline Status

[Table of CI build-status badges for the Inference and Training pipelines on Windows, Linux, Mac, Android, iOS, Web, and other platforms.]

Third-party Pipeline Status

[Table of CI build-status badges for third-party Linux inference and training pipelines.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.