ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
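
As a quick illustration of the inference workflow, here is a minimal sketch using the Python package: it loads an exported ONNX model and runs a single forward pass. The model path, input shape, and execution provider below are placeholders; substitute whatever your exported model and hardware require.

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the expected input name and shape from the model's signature.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Run inference; the feed dict maps input names to NumPy arrays.
# The example shape assumes an image-style model and is purely illustrative.
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```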

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
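
As a sketch of that one-line addition (assuming the onnxruntime-training package, which exposes the ORTModule wrapper), the only change to an existing PyTorch script is wrapping the model; the toy model and loop below are placeholders:

```python
import torch
from onnxruntime.training import ORTModule  # provided by the onnxruntime-training package

# Any existing torch.nn.Module; this toy network is illustrative only.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# The one-line addition: forward and backward now run through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# The rest of the training loop is unchanged.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```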

Get Started & Resources

Builtin Pipeline Status

[Build status badge table: inference and training pipeline badges for Windows, Linux, Mac, Android, iOS, Web, and other platforms.]

Third-party Pipeline Status

[Build status badges for Linux third-party inference and training pipelines.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.