ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
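For a concrete sense of the inference API, here is a minimal Python sketch. The model file name, input shape, and dtype are placeholder assumptions for an exported image model, not details fixed by this README:

```python
# Minimal ONNX Runtime inference sketch (Python API).
# Assumes a model file "model.onnx" whose first input accepts a float32
# tensor of shape (1, 3, 224, 224) -- adjust for your own model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})  # None -> return all outputs
print(outputs[0].shape)
```

The `providers` list selects the hardware backend; for example, passing `"CUDAExecutionProvider"` targets NVIDIA GPUs when the GPU package is installed.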

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
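As a hedged sketch of that one-line addition, assuming the torch-ort package (which exposes the ORTModule wrapper) is installed; the toy model, data, and training step below are placeholders, and real workloads would typically run on NVIDIA GPUs:

```python
# Illustrative sketch: accelerate an existing PyTorch training step with
# ORTModule. Assumes the torch-ort / onnxruntime-training packages are
# installed; the model, data, and loop here are toy placeholders.
import torch
import torch.nn.functional as F
from torch_ort import ORTModule

model = torch.nn.Linear(128, 10)
model = ORTModule(model)  # the one-line addition to an existing script

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = F.cross_entropy(model(inputs), labels)  # forward runs through ORT
loss.backward()                                # backward also runs through ORT
optimizer.step()
```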

Get Started & Resources

Build Pipeline Status

[Build-status badge table: per-platform pipelines for Windows, Linux, Mac, Android, iOS, and WebAssembly across CPU, GPU, and execution-provider (EP) builds; badge images not reproduced here.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.