# ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
## Latest commit: Support QDQ transformations with com.microsoft.Quantize/Dequantize ops (#17127)

Adrian Lizarraga, commit 5a83a67f32, 2023-08-25
### Description
- Enables int32 support for com.microsoft.DequantizeLinear (contrib op)
- Makes the `zero_point` input optional for the Quantize/Dequantize contrib ops (see the sketch after this list)
- Enables QDQ transformations with the Quantize/Dequantize contrib ops
- Updates tests: EnsureUniqueDQForNodeUnitTests, QDQTransformerTests, TransposeOptimizerTests
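
The sketch below builds a minimal model containing a Quantize -> Dequantize pair in the com.microsoft domain with the `zero_point` input omitted, which this change makes legal for the contrib ops. It is an illustrative sketch using the standard `onnx` helper API, not code from this PR; the tensor shapes and scale value are arbitrary placeholders.

```python
# Build a float -> QuantizeLinear -> DequantizeLinear -> float graph using
# the com.microsoft contrib ops. zero_point is omitted from both nodes.
from onnx import TensorProto, helper

q = helper.make_node(
    "QuantizeLinear", inputs=["x", "scale"], outputs=["x_q"],
    domain="com.microsoft",  # contrib-op domain
)
dq = helper.make_node(
    "DequantizeLinear", inputs=["x_q", "scale"], outputs=["x_dq"],
    domain="com.microsoft",
)
graph = helper.make_graph(
    [q, dq], "qdq_pair",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("x_dq", TensorProto.FLOAT, [2, 3])],
    # Scalar quantization scale shared by both nodes.
    initializer=[helper.make_tensor("scale", TensorProto.FLOAT, [], [0.05])],
)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 19),
                   helper.make_opsetid("com.microsoft", 1)],
)
```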

### Testing
List of tested graph transformations:
- [x] QDQSelectorActionTransformer
  - qdq_transformer_test.cc
- [x] QDQS8ToU8Transformer
  - qdq_transformer_test.cc
- [x] DoubleQDQPairsRemover
  - qdq_transformer_test.cc
- [x] IdenticalChildrenConsolidation
  - qdq_transformer_test.cc
- [x] QDQPropagation
  - qdq_transformer_test.cc
- [x] QDQFinalCleanup
  - qdq_transformer_test.cc
- [x] ClipQuantFusion
  - qdq_transformer_test.cc
- [x] ReluQuantFusion
  - qdq_transformer_test.cc
- [x] EnsureUniqueDQForNodeUnit 
  - ensure_unique_dq_for_node_unit_test.cc
- [x] TransposeOptimizer 
  - transpose_optimizer_test.cc
- [x] CommonSubexpressionElimination
  - graph_transform_test.cc
- [x] ConstantFolding
  - graph_transform_test.cc

### Motivation and Context
We need to [support mixed 16-bit/8-bit precision QDQ
models](https://github.com/microsoft/onnxruntime/pull/17015). This PR is
the first step toward that goal: making the QDQ contrib ops work with our
optimizations/transformations.

---------

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms. Learn more →
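
As a concrete illustration, here is a minimal sketch of running inference through the Python API; the model path, input shape, and execution provider below are placeholder assumptions:

```python
# Load an ONNX model and run it on random input using the CPU provider.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # assume a single model input
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})  # None = fetch all outputs
```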

ONNX Runtime training can accelerate model training on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
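
The "one-line addition" refers to the ORTModule wrapper. A minimal sketch, assuming the onnxruntime-training package is installed; the model and inputs here are stand-ins:

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

model = torch.nn.Linear(10, 2)
model = ORTModule(model)  # the one-line addition; the rest of the
                          # training loop stays ordinary PyTorch
loss = model(torch.randn(4, 10)).sum()
loss.backward()
```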

## Get Started & Resources

General information: https://onnxruntime.ai

## Builtin Pipeline Status

[Status badge table: Inference and Training build pipelines for Windows, Linux, Mac, Android, iOS, Web, and Other]

## Third-party Pipeline Status

[Status badge table: a third-party Linux pipeline covering Inference]

## Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

## Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

## Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

## License

This project is licensed under the MIT License.