* Update to flatbuffers v2.0.0 (#10866)
* Fix Reduced ops pipeline (#10861)
* Fix a couple of issues with the python package tools (#10858)
* Tweaks to the model utils
* Add handling for a dim_value of -1 when replacing the entire input shape; this occurs in models exported from PaddlePaddle (see the sketch after this list)
* make pytorch helpers accessible in package
* make QDQ helpers accessible in package
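
A minimal sketch of the -1 handling mentioned above. The helper name and structure are assumptions for illustration, not the actual tool code; only the `onnx` protobuf API is real:

```python
import onnx

def replace_input_shape(model: onnx.ModelProto, input_name: str, new_shape):
    """Hypothetical helper: replace the entire shape of one graph input."""
    for graph_input in model.graph.input:
        if graph_input.name != input_name:
            continue
        shape = graph_input.type.tensor_type.shape
        del shape.dim[:]  # drop the old shape entirely
        for value in new_shape:
            dim = shape.dim.add()
            if value == -1:
                # PaddlePaddle exports use dim_value == -1 to mean a dynamic
                # dimension; map it to a symbolic dim_param rather than
                # writing -1 as a fixed size.
                dim.dim_param = "dynamic"
            else:
                dim.dim_value = value
```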
* Fix wrong percentile values returned during calibration (#10847)
* Use numpy.percentile to get the lookup value (see the sketch after this list).
* Use 1.0 as float value rather than integer.
* Add missing cdf parameter for `np.percentile`.
* Use 100. instead of 1.0
* Remove print.
* Update from @yufenglee
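
A minimal sketch of the corrected lookup. The sample data is illustrative; the point is that `np.percentile` takes `q` in the range [0, 100], not [0, 1]:

```python
import numpy as np

# Stand-in for activation values collected during calibration.
activations = np.random.randn(10_000).astype(np.float32)

# np.percentile expects q in [0, 100], so a 99.999th-percentile lookup
# must pass the float 99.999, not 0.99999.
threshold = np.percentile(np.abs(activations), 99.999)
print(f"calibration threshold: {threshold:.4f}")
```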
* Add support for opset 16 to transpose optimizer. (#10841)
* Add support for opset 16 to transpose optimizer.
The only change required is adding GridSample to the layout-sensitive ops; the existing layout-transpose handling works for it because its first input and first output are layout sensitive.
Also update the optimizer so it can return an error message if it fails.
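
A sketch of the shape of that change. The registry name and its other contents are assumptions (the real optimizer is ORT-internal); only GridSample's addition comes from this entry:

```python
# Hypothetical registry of layout-sensitive ops for the transpose optimizer.
# GridSample (new in opset 16) slots into the existing handling because its
# first input and first output are the layout-sensitive ones.
LAYOUT_SENSITIVE_OPS = {"Conv", "Resize", "GridSample"}

def is_layout_sensitive(op_type: str) -> bool:
    return op_type in LAYOUT_SENSITIVE_OPS
```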
* Use separate build directories for full and mobile iOS packages. (#10835)
* Address performance issue with abseil flat_hash_table. (#10819)
When a hash table is returned by value in a cross-DLL call, it cannot find at least some of its entries, even though all of the original entries are present. Reverting to std::unordered_set
pending further investigation.
* Mark end of version 11 C API. (#10803)
* Mark end of version 11 C API
* Add static_assert
* avoid using LocalFree on FormatMessageW buffer (#10796)
* remove local free
* Remove local free from onnxruntime
* don't allocate
* Change to use constexpr to satisfy CPU build warning
* Integrate C-API tests into Pipelines for release packages (#10794)
* add c-api test for package
* fix bug for running c-api test for package
* refine run application script
* remove redundant code
* include CUDA test
* Remove testing CUDA EP temporarily
* fix bug
* Code refactor
* try to fix YAML bug (several attempts)
* fix bug for multiple directories in Pipelines
* fix bug
* add comments and fix bug
* Update c-api-noopenmp-packaging-pipelines.yml
* Remove failOnStandardError flag in Pipelines
* Detect runtime CUDA JIT and warn the user (#10781)
* Use cudaMalloc vs cudaDeviceSynchronize and show the total time
* Update convert_onnx_models_to_ort.py to support runtime optimizations. (#10765)
Add runtime optimization support to ONNX -> ORT format conversion script.
Replace `--optimization_level`, `--use_nnapi`, and `--use_coreml` with a new `--optimization_style` option.
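
A hedged usage sketch of the updated script; the `Runtime` value for `--optimization_style` is an assumption based on this description:

```python
import subprocess

# Convert an ONNX model to ORT format, keeping runtime optimizations.
# "model.onnx" is a placeholder path.
subprocess.run(
    [
        "python", "-m", "onnxruntime.tools.convert_onnx_models_to_ort",
        "model.onnx",
        "--optimization_style", "Runtime",
    ],
    check=True,
)
```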
* Add multithreading test and put a lock on nvinfer1::createInferRuntime() for TRT EP (#10714)
* Add multithread unit test and put a lock on the library call (pattern sketched after this list)
* update code
* remove debug code
* add comment
* add one session multi-threads inference
* Put lock for build engine all the time
* Update naming and comment
* remove unnecessary lock
* Revert "remove unnecessary lock"
This reverts commit 9c2317b1d2273dec0ebdeb52160bc757839e5edc.
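
The fix itself is C++ (a lock around `nvinfer1::createInferRuntime()`); below is a minimal Python sketch of the same pattern, serializing calls to a factory that is not safe to invoke concurrently. Names here are illustrative, not the EP's actual code:

```python
import threading

_create_lock = threading.Lock()  # shared across all threads

def create_runtime(factory):
    # Mirrors the EP change: only one thread at a time may call the
    # non-thread-safe factory (nvinfer1::createInferRuntime in the EP).
    with _create_lock:
        return factory()
```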
* Fix handling of nodes inserted by NHWC transformer. (#10904) (#10925)
* Revert "Upsample support NHWC (#10554)" (#10917)
This reverts commit

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
Get Started
General Information: onnxruntime.ai
Usage documentation and tutorials: onnxruntime.ai/docs (a minimal inference sketch follows the list below)
Companion sample repositories:
- ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples
- ONNX Runtime Training: microsoft/onnxruntime-training-examples
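
A minimal Python inference sketch; the model path, input name, and input shape are placeholders for your own model:

```python
import numpy as np
import onnxruntime as ort

# Load a model and run a single inference.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

# Placeholder input: adjust shape and dtype to match your model.
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```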
Build Pipeline Status
Per-platform build status badges cover Windows, Linux, Mac, Android, iOS, and WebAssembly across CPU, GPU, and EP configurations.
Data/Telemetry
Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
Contributions and Feedback
We welcome contributions! Please see the contribution guidelines.
For feature requests or bug reports, please file a GitHub Issue.
For general discussion or questions, please use GitHub Discussions.
Code of Conduct
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
License
This project is licensed under the MIT License.