* Add amd migraphx execution provider to onnx runtime
* rename MiGraphX to MIGraphX
* remove unnecessary changes in migraphx_execution_provider.cc
* add migraphx EP to tests
* add input requirements for the BatchNorm operator
* add support for the ONNX PRelu operator
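As a rough illustration of what the PRelu operator above computes, here is a minimal sketch of its per-element semantics in plain Python (this is not the EP implementation; the real ONNX operator also broadcasts the slope tensor across the input):

```python
def prelu(x, slope):
    """PRelu: f(x) = x for x >= 0, slope * x otherwise.

    Scalar sketch of the ONNX PRelu semantics; the actual
    operator applies this element-wise with broadcasting.
    """
    return x if x >= 0.0 else slope * x

print(prelu(3.0, 0.25))   # positive inputs pass through unchanged
print(prelu(-4.0, 0.25))  # negative inputs are scaled by the slope
```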
* update migraphx dockerfile and remove one unused line
* sync submodules with master branch
* fix a small bug
* fix various bugs to run MSFT real models correctly
* some code cleanup
* fix python file format
* fix a code style issue
* add default provider for migraphx execution provider
Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
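With the MIGraphX execution provider registered (including the default-provider entry above), a session can request it by name from the Python API. A minimal sketch, assuming an onnxruntime build with MIGraphX enabled and a placeholder `model.onnx` on disk (both are assumptions, not part of this change):

```python
# Preferred provider order: try MIGraphX first, fall back to CPU.
# "MIGraphXExecutionProvider" is the name this change registers.
providers = ["MIGraphXExecutionProvider", "CPUExecutionProvider"]

try:
    import onnxruntime as ort
    # "model.onnx" is a placeholder path; a MIGraphX-enabled
    # build of onnxruntime is assumed here.
    sess = ort.InferenceSession("model.onnx", providers=providers)
except Exception:
    pass  # onnxruntime or the model file is unavailable in this environment
```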
* Initial update of readme
* Readme updates
* Review of consolidated README (#3930)
* Proposed updates for readme (#3953)
I found some of the information was duplicated within the doc, so I attempted to streamline it
* Fix links
* More updates
- fix build instructions
- nodejs doc reorganization
- roadmap update
- version fixes
* Update ORT Server build instructions
* More doc cleanup
* fix python dev notes name
* Update nodejs and some links
* sync eigen version back to master
* Minor fixes
* add nodejs to sample table of contents
* Update README.md
* address PR feedback
* nodejs build instructions
* Update Java instructions to include gradle
* Roadmap refresh
Reformat some data, fix link, minor rewording
* Clarify Visual C++ runtime requirement
Co-authored-by: Nat Kershaw (MSFT) <nakersha@microsoft.com>
Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: manashgoswami <magoswam@microsoft.com>
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.
The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.
Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
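From the Python API, the new provider surfaces under the name `DmlExecutionProvider` (the C ABI entry added above is `OrtSessionOptionsAppendExecutionProvider_DML`). A hedged sketch of detecting it at runtime, assuming a DirectML-enabled Windows build of onnxruntime (not assumed to be present in every environment):

```python
wanted = "DmlExecutionProvider"  # provider name added by this change

try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except Exception:
    available = []  # onnxruntime (or a DML-enabled build) not present here

# Fall back to CPU when DirectML is not in this build.
providers = [wanted] if wanted in available else ["CPUExecutionProvider"]
print(providers)
```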
* Updates
* Remove preview texts
* Update README.md
* Updates
* Update README.md
* Minor wording update
* Update README.md
* Update doc on CUDA version
* revert update
* Update readme for issue #1558
* Clean up example section
* Cosmetic updates
- Add an index of build instructions for browsability
- Update build CUDA version from 9.1 to 10
* Fix broken link
* Update README to reflect upgrade to pip requirement
* Update cuDNN version for Linux Python packages
* Clean up content
Updated ordering and add table of contents
* Minor format fixes
* Move Android NNAPI under EP section
* Add link to operator support documentation
* Fix typo
* typo fix
* remove todo section
* Mention OrtCreateSessionFromArray in C API doc
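`OrtCreateSessionFromArray` creates a session from a model already held in memory rather than from a file path; the Python binding accepts model bytes for the same purpose. A sketch of the idea (the byte-buffer handling below is plain stdlib; the session call assumes onnxruntime and a real model file, and `"model.onnx"` is a placeholder):

```python
def load_model_bytes(path):
    """Read a serialized ONNX model into an in-memory buffer,
    mirroring the buffer that OrtCreateSessionFromArray consumes
    in the C API."""
    with open(path, "rb") as f:
        return f.read()

try:
    import onnxruntime as ort
    # "model.onnx" is a placeholder; a valid model file is assumed.
    sess = ort.InferenceSession(load_model_bytes("model.onnx"))
except Exception:
    pass  # onnxruntime or the model file is unavailable here
```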
* Don't create the default allocator every single time. Rename API accordingly.
* updates...
* PR comments
* fix typo in license header
* fix build