### Description

This PR updates the win-ort-main branch to the tip of the main branch as of 2025-01-23.

### PR List

- ddf0d377a7 [QNN EP] Add LoggingManager::HasDefaultLogger() to provider bridge API (#23467)
- 05fbbdf91f [QNN EP] Make QNN EP a shared library (#23120)
- 1336566d7f Add custom vcpkg ports (#23456)
- 2e1173c411 Update the compile flags for vcpkg packages (#23455)
- 1f628a9858 [Mobile] Add BrowserStack Android MAUI Test (#23383)
- 009cae0ec8 [js/webgpu] Optimize ConvTranspose (Continue) (#23429)
- 04a4a694cb Use onnx_protobuf.h to suppress some GCC warnings (#23453)
- 2e3b62b4b0 Suppress some strict-aliasing related warnings in WebGPU EP (#23454)
- b708f9b1dc Bump ruff from 0.9.1 to 0.9.2 (#23427)
- c0afc66b2a [WebNN] Remove workarounds for TFLite backend (#23406)
- 8a821ff7f9 Bump vite from 6.0.7 to 6.0.11 in /js/web/test/e2e/exports/testcases/vite-default (#23446)
- 220c1a203e Make ORT and Dawn use the same protobuf/abseil source code (#23447)
- b7b5792147 Change MacOS-13 to ubuntu on for android-java-api-aar-test.yml. (#23444)
- 19d0d2a30f WIP: Dp4MatMulNBits accuracy level 4 matmul for WebGPU EP (#23365)
- 95b8effbc4 [QNN EP]: Clean up QNN logging resources if an error occurs during initialization (#23435)
- 626134c5b5 Bump clang-format from 19.1.6 to 19.1.7 (#23428)
- 0cf975301f Fix eigen external deps (#23439)
- f9440aedce Moving RN_CI Android Testing to Linux (#23422)
- 1aa5902ff4 [QNN EP] workaround for QNN validation bug for Tanh with uint16 quantized output (#23432)
- 7f5582a0e2 Seperate RN andriod and IOS into 2 separated Stages. (#23400)
- 73deac2e7f Implement some missing element wise Add/Sub/Mul/Div/Neg operations for CPU and CUDA EPs (#23090)
- 949fe42af4 Upgrade Java version from react-native/android to Java 17 (#23066)
- 0892c23463 Update Qnn SDK default version to 2.30 (#23411)
- 94c099bcec Fix type cast build error (#23423)
- d633e571d1 [WebNN EP] Fix AddInitializersToSkip issues (#23354)
- e988ef00e2 [QNN EP] Fix regression for MatMul with two quantized/dynamic uint16 inputs (#23419)
- 7538795f6b Update onnxruntime binary size checks ci pipeline's docker image (#23405)
- 6c5ea41cad Revert "[QNN EP] Clean up correctly from a partial setup (#23320)" (#23420)
- e866804bbe Enable comprehension simplification in ruff rules (#23414)
- 0a5f1f392c bugfix: string_view of invalid memory (#23417)
- 4cc38e0277 fix crash when first input of BatchNormalization is 1-D (#23387)
- 033441487f Target py310 and modernize codebase with ruff (#23401)
- 87341ac010 [QNN EP] Fix segfault when unregistering HTP shared memory handles (#23402)

### Motivation and Context

This update includes the change that makes QNN EP a shared library.
---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Adrian Lizarraga <adlizarraga@microsoft.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: Peishen Yan <peishen.yan@intel.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Hector Li <hecli@microsoft.com>
Co-authored-by: Jian Chen <cjian@microsoft.com>
Co-authored-by: Alexis Tsogias <1114095+Zyrin@users.noreply.github.com>
Co-authored-by: junchao-zhao <68935141+junchao-loongson@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: sushraja-msft <44513542+sushraja-msft@users.noreply.github.com>
Co-authored-by: Wanming Lin <wanming.lin@intel.com>
Co-authored-by: Jiajia Qin <jiajiaqin@microsoft.com>
Co-authored-by: Caroline Zhu <wolfivyaura@gmail.com>

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.

ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts.
### Get Started & Resources

- General Information: onnxruntime.ai
- Usage documentation and tutorials: onnxruntime.ai/docs
- YouTube video tutorials: youtube.com/@ONNXRuntime
- Companion sample repositories:
  - ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples
  - ONNX Runtime Training: microsoft/onnxruntime-training-examples
### Built-in Pipeline Status

Status badges cover inference and training pipelines for Windows, Linux, Mac, Android, iOS, Web, and other platforms.

This project is tested with BrowserStack.
### Third-party Pipeline Status

Third-party status badges cover inference pipelines on Linux.
### Releases

The current and past releases can be found here: https://github.com/microsoft/onnxruntime/releases.

For details on the upcoming release, including release dates, announcements, features, and guidance on submitting feature requests, please visit the release roadmap: https://onnxruntime.ai/roadmap.
### Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
### Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.
### Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
### License

This project is licensed under the MIT License.