# ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. Learn more →
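
For illustration only, a minimal Python sketch of running inference on an exported model; the model path, input shape, and execution-provider list below are assumptions, not part of this README.

```python
# Minimal sketch: load an ONNX model and run a single inference.
# "model.onnx" and the (1, 3, 224, 224) input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# ONNX Runtime falls back through the provider list, e.g. CUDA if available, else CPU.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Feed inputs keyed by the model's input names and run the graph.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```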

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
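
As a sketch of what that one-line addition can look like (assuming the onnxruntime-training package with ORTModule and a supported GPU environment; the model, optimizer, and data below are illustrative placeholders):

```python
# Sketch: route forward/backward of an existing PyTorch model through ONNX Runtime.
import torch
from onnxruntime.training.ortmodule import ORTModule  # also available via the torch-ort package

model = torch.nn.Linear(128, 10)   # placeholder for an existing torch.nn.Module
model = ORTModule(model)           # the one-line addition to an existing training script

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder batch; the rest of the training loop is unchanged.
inputs = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```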

## Get Started & Resources

## Build Pipeline Status

| System  | Inference    | Training     |
|---------|--------------|--------------|
| Windows | Build Status | Build Status |
| Linux   | Build Status | Build Status |
| Mac     | Build Status |              |
| Android | Build Status |              |
| iOS     | Build Status |              |
| Web     | Build Status |              |
| Other   | Build Status |              |

## Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

## Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

## Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

## License

This project is licensed under the MIT License.