ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Enrico Galli 1e5bda88f0 (2024-10-18)
[WebNN EP] Cache MLTensors between runs (#22278)
### Description
This change enables caching `MLTensor`s between inference runs. This is done by keeping a reference to each `MLTensor` alive after it has been released. `MLTensor`s are only destroyed once the session goes out of scope.

### Motivation and Context
Creating and destroying `MLTensor`s on every run carries a non-trivial performance penalty. The penalty shows up when using `ort.Tensor`s with `location=cpu` for inputs/outputs, or when using the CPU EP as a fallback EP for unsupported operators. The former can be mitigated by developers using `ort.Tensor`s with `location=ml-tensor`; the latter cannot be mitigated by developers.
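For illustration only, here is a minimal sketch of the usage pattern this change speeds up, assuming onnxruntime-web's WebNN EP session options (`executionProviders: [{ name: 'webnn' }]`, `preferredOutputLocation: 'ml-tensor'`); the model path, input name, and tensor shape are placeholders:

```ts
import * as ort from 'onnxruntime-web';

async function main() {
  // Create a session on the WebNN EP; 'ml-tensor' keeps outputs on the device
  // as MLTensors instead of copying them back to the CPU on every run.
  const session = await ort.InferenceSession.create('model.onnx', {
    executionProviders: [{ name: 'webnn', deviceType: 'gpu' }],
    preferredOutputLocation: 'ml-tensor',
  });

  // A CPU-located input tensor (location=cpu). Before this change, the MLTensor
  // used to upload it was created and destroyed on every run; now it is cached.
  const input = new ort.Tensor(
    'float32',
    new Float32Array(1 * 3 * 224 * 224),
    [1, 3, 224, 224],
  );

  for (let i = 0; i < 10; ++i) {
    const results = await session.run({ input });
    // ... consume results ...
  }

  // Cached MLTensors are released together with the session.
  await session.release();
}

main();
```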

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
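For example, a minimal sketch of running an exported ONNX model from JavaScript with the `onnxruntime-web` package; the model file, input name, and tensor shape here are placeholders:

```ts
import * as ort from 'onnxruntime-web';

async function main() {
  // Load a model exported from PyTorch, TensorFlow/Keras, scikit-learn, etc.
  const session = await ort.InferenceSession.create('model.onnx');

  // Feed inputs keyed by the input names defined in the model graph.
  const input = new ort.Tensor(
    'float32',
    new Float32Array(1 * 3 * 224 * 224),
    [1, 3, 224, 224],
  );
  const results = await session.run({ input });

  console.log(results);
}

main();
```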

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →

Get Started & Resources

Builtin Pipeline Status

System / Inference / Training build status badges for Windows, Linux, Mac, Android, iOS, Web, and Other.

This project is tested with BrowserStack.

Third-party Pipeline Status

System / Inference / Training build status badge for Linux.

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.