* Remove nGraph Execution Provider
Pursuant to nGraph deprecation notice: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md#deprecation-notice
**Deprecation Notice**
| | |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features
previously available through nGraph have been merged into the OpenVINO™
toolkit. As a result, all of the features previously available through the
ONNX RT Execution Provider for nGraph have been merged into the ONNX RT
Execution Provider for the OpenVINO™ toolkit.
Therefore, the ONNX RT Execution Provider for **nGraph** will be deprecated
starting June 1, 2020, and will be completely removed on December 1,
2020. Users are advised to migrate to the ONNX RT Execution Provider
for the OpenVINO™ toolkit as the unified solution for all AI inferencing on
Intel® hardware.
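For anyone scripting the migration, the practical change is the execution provider name requested at session creation. The following is a minimal, stdlib-only sketch of that selection logic; the provider-name strings match ONNX Runtime's conventions, but the helper itself (`select_providers`) is illustrative, not part of the API:

```python
# Hypothetical helper sketching the nGraph -> OpenVINO migration: request the
# OpenVINO EP when available and always keep CPU as the final fallback.
PREFERRED = ["OpenVINOExecutionProvider", "CPUExecutionProvider"]

def select_providers(available):
    """Return the preferred providers that are actually available, in order."""
    chosen = [ep for ep in PREFERRED if ep in available]
    # CPU is the last-resort fallback even if it was not listed as available.
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen
```

The resulting list would be passed as the `providers` argument when constructing an inference session.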
* Remove nGraph License info from ThirdPartyNotices.txt
* Use simple Test.Run() for tests without EP exclusions
To be consistent with the rest of the test code.
* Remove nGraph EP functions from Java code
* Add options for nnapi ep
* Add nnapi flags test
* add comments
* Add flag comments
* Make the flags bitset const
* Fix build break
* Add stub changes to java and c# api
* Fix java related build break
* Fix java build break
* Switch to bit flags instead of bitset
* add Python API for getProfilingStartTime
* debug for using Python API
* add in C# api
* use uint instead of uint64_t to prevent warning
* fix typo in GetProfilingStartTimeNs
* remove const
* Update onnxruntime/python/session.py
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
* remove unnecessary return
* Add Python unit test
* Add C# unit test and refactor Python test
* use ulong in C# for uint64_t in C++
* remove time.monotonic_ns
* syntax: remove public for inner function
* correct the API order
* call GetProfilingStartTime after Run
* Correct the right order in NativeMethod.cs
* update order
* nit: remove spaces
* Update csharp/src/Microsoft.ML.OnnxRuntime/InferenceSession.cs
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
* use the updated function
* add comment about the precision
* add more comments
* add session.py back
* fix flake8
* remove session.py
* Add comments in C, C#, Python APIs about precision
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
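The commits above expose the profiling start time in nanoseconds across the C, C#, and Python APIs, with comments noting clock precision. A stand-in sketch of the contract (this `ProfilingSession` class is illustrative, not the real `InferenceSession`):

```python
import time

class ProfilingSession:
    """Illustrative stand-in: records the profiling start time in ns."""
    def __init__(self):
        # Integer nanoseconds from a monotonic clock; note the underlying
        # clock's resolution may be coarser than 1 ns, hence the precision
        # comments added in the C, C#, and Python APIs.
        self._profiling_start_ns = time.monotonic_ns()

    def get_profiling_start_time_ns(self):
        return self._profiling_start_ns

sess = ProfilingSession()
later_ns = time.monotonic_ns()
assert sess.get_profiling_start_time_ns() <= later_ns
```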
* Allow sharing of initializers between sessions.
* Allow sharing of initializers between sessions (2).
* Add test for C#
* Add test for C#; address PR comments
* Address PR comments
Moved AddInitializer logic to internal session options
Added tests for owned buffer
Clarified documentation
Fix bug where memory info and not device was getting compared
* Fix test
* Fix training build
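The initializer-sharing change lets pre-created weight buffers be registered once on the session options and referenced by every session built from them, rather than each session copying its own weights. A stdlib sketch of the ownership model (class and method names are illustrative, mirroring the `AddInitializer` idea):

```python
# Illustrative sketch of initializer sharing between sessions.
class SessionOptions:
    def __init__(self):
        self.initializers = {}

    def add_initializer(self, name, buffer):
        # The caller keeps ownership of `buffer`; sessions only hold a reference.
        self.initializers[name] = buffer

class Session:
    def __init__(self, options):
        self.initializers = options.initializers  # shared, not copied

weights = [0.1, 0.2, 0.3]
opts = SessionOptions()
opts.add_initializer("fc.weight", weights)
s1, s2 = Session(opts), Session(opts)
# Both sessions see the exact same buffer object.
assert s1.initializers["fc.weight"] is s2.initializers["fc.weight"]
```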
* Add ver 5 end marker and ver 6 starter, add scenario and usage examples.
* Add SetLanguageProjection C Api and use it in four projections
* static cast enum languageprojection to uint32_t
* resolve comments
* fix typo and line added unintentionally
* revert unnecessary change
* reorder c# api
* add TensorAt and CreateAndRegisterAllocator in Csharp to keep the same order as C apis
* adding generic configurations for session options
* fix a build break on linux
* fix training ci build break
* fix training ci build break
* addressed CR comments
* fix training ci build break
* move config_key from enum to string
* add c# api
* add python api
* fix build break
* move prepacking from 2 new api entries to session options configs
* fix training ci build break
* add python test, update some comments, move const key definition to avoid build break
* addressed comments
* move definitions of keys to common.h
* move api to version 5
* remove accidental change in build.py
* remove pragma to avoid build break
* addressed CR comments
* fix the python build break, and move location of config keys definition
* small typo changes
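Moving `config_key` from an enum to a string, as these commits do, means new configuration entries can be added without breaking the C ABI or the enum's numbering. A minimal sketch of the string-keyed options pattern (the class is illustrative; the `session.disable_prepacking` key is one real example of the naming scheme, hedged here as an assumption):

```python
# Illustrative sketch of string-keyed session config entries.
class SessionOptions:
    def __init__(self):
        self._configs = {}

    def add_session_config_entry(self, key, value):
        # Both key and value are plain strings, so new keys need no ABI change.
        self._configs[key] = value

    def get_session_config_entry(self, key, default=""):
        return self._configs.get(key, default)

opts = SessionOptions()
opts.add_session_config_entry("session.disable_prepacking", "1")
assert opts.get_session_config_entry("session.disable_prepacking") == "1"
assert opts.get_session_config_entry("session.unknown_key", "0") == "0"
```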
* Add amd migraphx execution provider to onnx runtime
* rename MiGraphX to MIGraphX
* remove unnecessary changes in migraphx_execution_provider.cc
* add migraphx EP to tests
* add input requests of the batchnorm operator
* add to support an onnx operator PRelu
* update migraphx dockerfile and remove one unused line
* sync submodules with master branch
* fixed a small bug
* fix various bugs to run msft real models correctly
* some code cleanup
* fix python file format
* fixed a code style issue
* add default provider for migraphx execution provider
Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
* Fix C# log APIs. Fixes github issue #3409.
* Fix build error due to accidental duplication of GraphOptimizationLevel
* Fix runoptions
* Fix broken test. Add --blame switch to dotnet test cmd line to print the failed test in case of crash.
* Initial commit
* More changes
* More changes
* More changes 3
* More changes 4
* More changes 5
* More changes 5
* More changes 6
* More changes 7
* More changes 8
* Remove C# ifdefs
* More changes 10
* More changes 11
* YAML changes for other release pipelines
* Add release notes metadata
* Props and Targets change
* Add CSHarp proj
* More changes 12
* More changes
* Minor fix
* Minor fix
* Fix yaml
* Some missing logic for winml
* Minor update
* Fix casing for winmd file
* Fix casing
* Add targets and props for managed section into native nuget
* revert file
* a
* Fix C# handling of unicode strings
* more tests
* check for handle before freeing
* variable reuse efficiency
* refactor and clean up utf8 to utf16 conversion block
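The Unicode fix boils down to marshalling strings across the managed/native boundary as UTF-8 bytes rather than letting them degrade to a platform code page. A sketch of the round-trip contract (helper names are illustrative):

```python
# Illustrative sketch of the UTF-8 marshalling round-trip at the
# managed <-> native boundary.
def to_native(s: str) -> bytes:
    # NUL-terminated UTF-8, as a C string API would expect.
    return s.encode("utf-8") + b"\x00"

def from_native(b: bytes) -> str:
    return b.split(b"\x00", 1)[0].decode("utf-8")

name = "モデル.onnx"  # a non-ASCII model path
assert from_native(to_native(name)) == name
```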
- Added C-API test for loading custom op shared lib.
- Made some changes in C++ api header and C-api implementation to get it working.
- Added C# API and corresponding test for loading custom op shared library.
* Introduce execution mode for clarity and extensibility; Change Python APIs accordingly; Replace DisableSequentialExecution API with EnableParallelExecution for clarity.
* Fix cuda build
* Modify the test slightly
* Make C and C# APIs consistent with Python.
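Replacing `DisableSequentialExecution` with an explicit execution-mode enum makes the intent readable and leaves room for future modes beyond a boolean toggle. A sketch of the shape of the change, using the enum member names from the C API (the `SessionOptions` stand-in is illustrative):

```python
from enum import IntEnum

# Enum member names follow the C API; the options class is a stand-in.
class ExecutionMode(IntEnum):
    ORT_SEQUENTIAL = 0
    ORT_PARALLEL = 1

class SessionOptions:
    def __init__(self):
        # Sequential remains the default, matching the old behavior.
        self.execution_mode = ExecutionMode.ORT_SEQUENTIAL

opts = SessionOptions()
opts.execution_mode = ExecutionMode.ORT_PARALLEL
assert opts.execution_mode == ExecutionMode.ORT_PARALLEL
```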
* Add ability to get symbolic dimension info for graph inputs and outputs.
WIP to get initial feedback.
* Fix linux build error.
Update C# API and add unit test
* Clarify the two different ways Tensor shape and type info is created. One is from concrete values and one is from a type proto where symbolic dimensions may exist. Doing so allows a change to default to empty strings for the symbolic dimensions if not provided.
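Under that scheme, each dimension of an input or output is either a concrete integer or a symbolic name, and an unnamed symbolic dimension defaults to the empty string. A small sketch of how a shape might be reported (the helper is illustrative, not an API function):

```python
# Illustrative: normalize a shape where each dim is an int, a symbolic
# name, or None (unknown/unnamed); unnamed symbolic dims become "".
def describe_shape(dims):
    return [d if isinstance(d, int) else (d or "") for d in dims]

# e.g. an input with a symbolic batch dim and an unnamed dynamic dim
assert describe_shape(["batch", 3, None, 224]) == ["batch", 3, "", 224]
```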
* add ctor overloads that accept model byte array
* doxygen. mark Init method as private.
* doxygen
* rename test method for clarity
* PR feedback - add two overloads that accept either model path or model byte array
* update native signature to align with latest codebase
* fix native call
* Mention OrtCreateSessionFromArray in C API doc
* Add C API for free dim override
* Add C API for free dim override, fix missing API mention in InferenceTest.cs, fix confusing print statement in perf_test.
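A free-dimension override pins a model's symbolic (free) input dimensions to concrete values before the session is created. A sketch of the substitution it performs (the helper name and the `overrides` mapping are illustrative):

```python
# Illustrative: replace symbolic dims with concrete values from `overrides`
# (name -> int); concrete dims and unmatched names pass through unchanged.
def apply_free_dim_overrides(shape, overrides):
    return [overrides.get(d, d) if isinstance(d, str) else d for d in shape]

shape = ["batch", 3, 224, 224]
assert apply_free_dim_overrides(shape, {"batch": 8}) == [8, 3, 224, 224]
```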
* Remaining C#files
* fix c# build
* Run the tests in blame mode. This option is helpful in isolating a problematic test causing the test host to crash.
* fix order
Description: Refine threading control options and move inter op thread pool to session state.
Added thread_utils.h/cc to centralize the decision around the thread pool size under various conditions.
Motivation and Context
Currently the thread pool size of the parallel executor is hardcoded to 32. This PR makes the options for configuring the thread pool sizes clearer.
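The centralized sizing decision described above can be sketched as a single helper, where 0 means "let the runtime decide" (typically from the core count) instead of falling back to a hardcoded constant. The function name is illustrative, loosely echoing the `thread_utils` intent:

```python
import os

# Illustrative sketch of a centralized thread-pool sizing decision.
def decide_thread_pool_size(requested: int) -> int:
    if requested > 0:
        return requested           # an explicit user choice always wins
    return os.cpu_count() or 1     # default: one thread per core, min 1
```

Both the intra-op and inter-op pools could route through one such function, so the policy lives in a single place.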
* Implement Nuphar execution provider
Nuphar execution provider is a TVM-based compilation provider. It has shown great speedups for RNN models using Scan.
This PR is mainly for a preview of the shared codegen library for other TVM-based providers.
* Fix submodules
* Fix TVM submodule
* Update Nuphar to latest and resolve conflicts
* Remove stale files caused by merge -X theirs
* Revert heap buffer change to not introduce onnxruntime_framework into onnxruntime_perf_test
* Fix bad merge
* Merge from Nuphar
* Fix warning treated as error, revert some unnecessary changes
* Revert some more test changes
* Some more test revert or comments to make review easier
New tests could be added later
* One more revert of unnecessary changes
* More change revert. Test could be added back later.
* Mention OrtCreateSessionFromArray in C API doc
* Don't create the default allocator every single time. Rename API accordingly.
* Don't create the default allocator every single time. Rename API accordingly.
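The allocator change above is a classic cache-on-first-use fix: the API returns one shared default allocator instead of constructing a fresh one per call, which the rename reflects. A minimal sketch of the pattern (names are illustrative):

```python
# Illustrative sketch: cache the default allocator on first use and
# return the same instance on every subsequent call.
class DefaultAllocator:
    pass

_cached_allocator = None

def get_allocator_with_default_options():
    global _cached_allocator
    if _cached_allocator is None:
        _cached_allocator = DefaultAllocator()
    return _cached_allocator

assert get_allocator_with_default_options() is get_allocator_with_default_options()
```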
* updates...
* updates...
* PR comments
* fix typo in license header
* fix build
- Support bool-Tensor and int8-Tensor in input-output of C# api
- Support string-tensor as input in C# api
- Fix a bug in InferenceSession.Run() -- RunOptions was not passed into the native call
* Mention OrtCreateSessionFromArray in C API doc
* review changes
* use enum for graph optimization level
* Use explicit values for enums
* updates...
* Add friendly enum for graph optimization levels in C, C# and Python APIs.
* Fix linux build
* Fix build breakage due to master merge
* PR comments
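Using explicit values for the optimization-level enum, as the commits above do, keeps the numeric ABI stable even if members are later reordered or gaps are introduced. A sketch using the member names and values from the C API's `GraphOptimizationLevel`:

```python
from enum import IntEnum

# Member names and values follow the ONNX Runtime C API; the explicit
# values (note the deliberate gap before 99) keep the ABI stable.
class GraphOptimizationLevel(IntEnum):
    ORT_DISABLE_ALL = 0
    ORT_ENABLE_BASIC = 1
    ORT_ENABLE_EXTENDED = 2
    ORT_ENABLE_ALL = 99

assert GraphOptimizationLevel.ORT_ENABLE_ALL == 99
```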
* Refactor C# to handle x86
* update run script
* Add Native win x86 tests
* Add native x86 tests for Linux
* Update linux tests scripts to control which tests are run
* update linux image name for x86 to prevent using cached image
* update to not run Python unit tests unless pybind is specified
* remove --build_wheel as a core common arg. Python cannot run on x86 build
* update OrtGetNumOfDimensions to OrtGetDimensionsCount in rest of C#