ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
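As a quick illustration of the inference API described above, here is a minimal Python sketch that loads an already-exported ONNX model and runs it with the onnxruntime package. The file name `model.onnx`, the input name `input`, and the input shape are placeholder assumptions, not part of this repository.

```python
# Minimal inference sketch: assumes a model exported to "model.onnx"
# whose first input is a float32 tensor named "input" of shape (1, 3, 224, 224).
import numpy as np
import onnxruntime as ort

# Create a session; graph optimizations are applied automatically.
# Execution providers are tried in listed order; CPU is always available.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Build a dummy batch matching the assumed input shape.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run the model; passing None as the output list returns all outputs.
outputs = session.run(None, {"input": x})
print(outputs[0].shape)
```

To target a hardware accelerator instead, the same call typically takes a different provider list (for example `["CUDAExecutionProvider", "CPUExecutionProvider"]`), provided the corresponding ONNX Runtime build and drivers are installed.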

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
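The "one-line addition" referenced above is typically wrapping an existing `torch.nn.Module` with `ORTModule`. Below is a hedged sketch assuming the torch-ort / onnxruntime-training packages are installed; `MyModel`, `train_loader`, and `loss_fn` stand in for the user's existing training code.

```python
# Sketch: accelerating an existing PyTorch training script with ORTModule.
# Assumes the torch-ort (onnxruntime-training) package is installed;
# MyModel, train_loader, and loss_fn are placeholders for existing user code.
import torch
from torch_ort import ORTModule

model = ORTModule(MyModel())                # the one-line addition
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for inputs, labels in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)   # forward/backward execute through ORT
    loss.backward()
    optimizer.step()
```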

Get Started

General Information: onnxruntime.ai

Usage documentation and tutorials: onnxruntime.ai/docs

Companion sample repositories:

ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples

ONNX Runtime Training: microsoft/onnxruntime-training-examples

Build Pipeline Status

[Build status badge table: rows for Windows, Linux, Mac, Android, iOS, and WebAssembly; columns for CPU, GPU, and EPs pipelines.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.