* move some optimizers to level 1
* move matmul add fusion to level 1
* bug fix in the test code
* fix make_uniques + add test exceptions
* add exception for tests in C# too
* Initial draft
* updates per review
* fix link
* plus one more link fix
* small changes to the optimizer documentation
* some more changes
* done
* update C_API with doc link
* script for converting ONNX model for BERT performance optimization
* Remove code that is no longer needed.
* refine the script
* Support BERT model exported from PyTorch 1.3
Keep opset version
Exact match in Attention and Layer normalization fusions.
* read batch_size from input model directly
* Refine optimizers
* Address PR comments
* Changes from PR comments and discussion.
* Fixed signed/unsigned mismatch
* Address PR comments
* Address PR comments
* Fix linux build
* Fix issue with mkldnn logic.
* Turn off optimizers by default for operator unit tests.
* Handle edge case of graph with no nodes in partitioner so all execution providers don't need to.
* Comment out change to turn off optimizers for unit tests. Add details on what needs to be done to re-enable.
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.
The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.
Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
* use dilations for computing effective kernel shape for conv/pool ops
* when auto_pad is 'VALID', total_pads should be empty
* added support for ArrayFeatureExtractor and ZipMap
* check out_shape only if the output has a shape, i.e. the output is of TensorType or SparseTensorType
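The dilation handling mentioned above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual ORT shape-inference code; the function names are invented for this example. A kernel of size `k` with dilation `d` spans `(k - 1) * d + 1` input elements, and with `auto_pad == 'VALID'` no padding is applied (the total pads are empty), so the output shrinks by the effective kernel extent:

```python
def effective_kernel_shape(kernel_shape, dilations):
    # A kernel of size k with dilation d covers (k - 1) * d + 1 input elements.
    return [(k - 1) * d + 1 for k, d in zip(kernel_shape, dilations)]

def valid_pad_output_shape(input_shape, kernel_shape, dilations, strides):
    # auto_pad == 'VALID': no padding, so the spatial dims shrink by the
    # effective kernel extent before applying the stride.
    eff = effective_kernel_shape(kernel_shape, dilations)
    return [(i - e) // s + 1 for i, e, s in zip(input_shape, eff, strides)]

print(effective_kernel_shape([3, 3], [2, 2]))                   # [5, 5]
print(valid_pad_output_shape([7, 7], [3, 3], [2, 2], [1, 1]))   # [3, 3]
```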
In ORT, there are only three CUDA streams: default, HtoD, and DtoH. Both HtoD and DtoH are non-blocking streams, so per-thread stream mode doesn't provide any benefit.
I also tested in a multi-threaded environment, and legacy mode outperforms per-thread mode there as well.
Below is the performance of a 3-layer BERT on a V100 (unit: ms):

Batch size 1:

| concurrency | c=1 | c=2 | c=4 |
| --- | --- | --- | --- |
| legacy | 0.54 | 1.17 | 2.68 |
| per-thread | 0.66 | 1.37 | 2.86 |

Batch size 4:

| concurrency | c=1 | c=2 | c=4 |
| --- | --- | --- | --- |
| legacy | 1.1 | 2.22 | 4.6 |
| per-thread | 1.21 | 2.44 | 4.98 |

Batch size 64:

| concurrency | c=1 | c=2 | c=4 |
| --- | --- | --- | --- |
| legacy | 8.09 | 16.13 | 32.37 |
| per-thread | 8.18 | 16.26 | 32.45 |
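The gap between the two modes can be quantified from the numbers above. A small sketch (values copied from the batch-size-1 and batch-size-64 rows) computes the relative slowdown of per-thread stream mode versus legacy mode, showing the overhead is largest at small batch sizes and nearly vanishes at batch size 64:

```python
def overhead_pct(legacy, per_thread):
    # Relative slowdown of per-thread stream mode vs. legacy mode, in percent.
    return [(p - l) / l * 100 for l, p in zip(legacy, per_thread)]

# Batch size 1: per-thread is roughly 7-22% slower.
print([round(x, 1) for x in overhead_pct([0.54, 1.17, 2.68], [0.66, 1.37, 2.86])])
# -> [22.2, 17.1, 6.7]
# Batch size 64: the gap is within ~1%.
print([round(x, 1) for x in overhead_pct([8.09, 16.13, 32.37], [8.18, 16.26, 32.45])])
# -> [1.1, 0.8, 0.2]
```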
Enable multi-device test for GPU
* Add build pipeline for TensorRT multi-GPU test
* Add code to disable the fp16 test if the hardware architecture is not supported
* Add option to set the device id in onnx_test_runner for model tests
* Adjust ngraph cmake files to onnx 1.5.0
* Enable LSTM reverse direction mode in nGraph EP
* Enable full support for the Split op in nGraph EP
* Revert "Disable the unsigned input Shrink op tests for nGraph until the next update"
This reverts commit 257b42a55bdd98f804d4846868542b8e3aeb4b4e.
* Enable Gather and remove unused subgraph attribute
* Remove the unused param from AppendClusterToSubGraph
* Fix for the incorrect onnx opset version
* Use the r0.26 release branch before the tag is created
* Enable the quantizelinear and dequantizelinear for NGEP
* Use the v0.26.0-rc.2 tag in ngraph.cmake
* Add skip for modes other than default in the Pad operator
* Reenable negative axis tests for ngraph
* Use temporary ngraph version
* Use branch name instead of SHA for temporary ngraph branch
* Use ngraph v0.26.0-rc.4
* Remove patch for missing symbol in MKLDNN
* Use MKLDNN 1.0 in ngraph
* Exclude the Pad op for opsets greater than 10
* Disable quantizelinear and dequantizelinear tests for ONNX 1.5.0
* Fix the onnx-headers related compilation errors
* ONNX libs linking fix
* Use a tag for ngraph and support more Pad modes
* Use the v0.26.0 release tag for nGraph
* Update ngraph to RC8 - bigobj flag for Windows builds
* Fix the MKLDNN constexpr error on Windows
* Introduce execution mode for clarity and extensibility; change Python APIs accordingly; replace the DisableSequentialExecution API with EnableParallelExecution.
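The design idea above, replacing a boolean toggle with an execution-mode enum, can be sketched as follows. This is an illustrative sketch of the API shape, not the actual ORT implementation; the class and member names are invented for this example:

```python
from enum import Enum

class ExecutionMode(Enum):
    # An enum is clearer and more extensible than a
    # DisableSequentialExecution boolean flag.
    SEQUENTIAL = 0
    PARALLEL = 1

class SessionOptions:
    def __init__(self):
        # Sequential execution remains the default.
        self.execution_mode = ExecutionMode.SEQUENTIAL

opts = SessionOptions()
opts.execution_mode = ExecutionMode.PARALLEL  # opt in to parallel execution
```

An enum also leaves room for future modes without another boolean-flag API.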
* Fix cuda build
* Modify the test slightly
* Make C and C# APIs consistent with Python.
* Add ability to get symbolic dimension info for graph inputs and outputs.
WIP to get initial feedback.
* Fix Linux build error.
Update C# API and add unit test
* Clarify the two different ways Tensor shape and type info is created: one from concrete values, and one from a type proto where symbolic dimensions may exist. Doing so allows defaulting the symbolic dimensions to empty strings if not provided.
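To illustrate what symbolic dimension info means for a consumer of the API, here is a small self-contained sketch (not the actual ORT code; the function names are invented for this example). A shape mixes concrete ints with symbolic names such as `"batch"`, unnamed symbolic dimensions default to empty strings, and named ones can be resolved once concrete values are known:

```python
def make_shape(dims):
    # Concrete dims stay as ints; symbolic dims are strings.
    # An unnamed symbolic dim (None) defaults to the empty string.
    return [d if isinstance(d, int) else (d or "") for d in dims]

def resolve(shape, bindings):
    # Substitute concrete values for any named symbolic dimensions.
    return [bindings.get(d, d) if isinstance(d, str) else d for d in shape]

shape = make_shape(["batch", None, 128])
print(shape)                          # ['batch', '', 128]
print(resolve(shape, {"batch": 4}))   # [4, '', 128]
```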