* Update ONNX to 1.12 (#11924)
Follow-ups that need to happen after this and before the next ORT release:
* Support SequenceMap with https://github.com/microsoft/onnxruntime/pull/11731
* Support signal ops with https://github.com/microsoft/onnxruntime/pull/11778
Follow-ups that need to happen after this but don't necessarily need to happen before the release:
* Implement LayerNormalization kernel for opset version 17: https://github.com/microsoft/onnxruntime/issues/11916. Fixes #11640
* Dll version fix ovep4.1 (#11953)
* Set default version values for OVEP DLLs as well
* Update backend_manager.cc
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: mohsin <mohsinx.mohammad@intel.com>
* Optimize t5 encoder in beam search (#11926)
* optimize T5 encoder
* update
* update
* update
* refactor expand impl
* cuda tests passed
* update
* alignment
* more alignments
* review comments
* Allow saving on CPU usage for infrequent inference requests by reducing thread spinning (#11841)
Introduce Start/Stop threadpool spinning switch
Add a session config option to force spinning stop at the end of the Run()
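As a sketch of how such options are set from Python (the key names here are assumptions; verify them against `onnxruntime_session_options_config_keys.h` in your build):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Assumed config key -- disable intra-op thread spinning entirely,
# trading a little latency for idle CPU on infrequent requests.
so.add_session_config_entry("session.intra_op.allow_spinning", "0")
# Assumed key for the option this change adds: force spinning to
# stop at the end of each Run().
so.add_session_config_entry("session.force_spinning_stop", "1")
sess = ort.InferenceSession("model.onnx", sess_options=so)
```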
* Restructure function inliner (#11731)
* Add nested function call tests
* Add overload for Specialize
* Pass symboltable to onnx shape inference
* Avoid renaming empty names
* Enable sequence_map tests which failed before this change
* Deprecate APIs returning raw ptrs and provide replacements (#11922)
Provide better documentation
* register signal ops for opset 17 (#11778)
* Register signal ops for op set 17
Note code is mostly being moved, not added. These ops were previously
only registered as Microsoft contrib ops and only built if
`BUILD_MS_EXPERIMENTAL_OPS=1`. They've been added to the ai.onnx
standard op set in version 17.
Main components of this change:
* Move the kernels from the contrib_ops directory to the
core directory.
* Add function bodies for ms experimental ops. This will allow
old models that use the contrib ops to continue to function.
All the function bodies consist of a single op (the
new standard op), so performance overhead should be minimal.
Minor clean-up also in this change:
* De-duplicate get_scalar_value_from_tensor: put it in a new utils.h.
* Fix some bugs that caused compilation errors with the experimental
ops. Tested with `build.sh --ms_experimental`
* Fix some spelling errors and lint violations.
* Replace a couple of switch statements with `MLTypeCallDispatcher`.
* Use `InlineVector` instead of `std::vector`.
Unblocks https://github.com/microsoft/onnxruntime/issues/11640
* Include opset 15 in Conv+BatchNormalization fusion (#11960)
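The fusion itself is the standard inference-time BatchNormalization folding; a per-channel scalar sketch of that math (not the ORT implementation, which also handles the opset 15 schema differences):

```python
import math

def fold_bn_into_conv(weights, bias, gamma, beta, mean, var, eps=1e-5):
    # y = gamma * (conv(x) + bias - mean) / sqrt(var + eps) + beta
    # is rewritten as conv'(x) + bias' by scaling the conv weights,
    # so the BatchNormalization node disappears at inference time.
    scale = gamma / math.sqrt(var + eps)
    folded_weights = [w * scale for w in weights]
    folded_bias = (bias - mean) * scale + beta
    return folded_weights, folded_bias
```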
* Fix WinML tests that are still targeting deprecated (deleted) experimental signal op definitions (#12006)
* fix winml tests
* remove legacy test
* switch idft -> dft+inverse attr
* upgrade opset 13->17 for signal ops tests
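The IDFT replacement works because opset 17 defines a single DFT op with an `inverse` attribute; a naive O(n²) sketch of that relationship (hypothetical helper, not the ONNX kernel):

```python
import cmath

def dft(x, inverse=False):
    # Opset 17's DFT takes an "inverse" attribute, so a separate
    # IDFT op is no longer needed -- IDFT becomes DFT(inverse=1).
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out
```

Round-tripping `dft` then `dft(..., inverse=True)` recovers the input up to floating-point error.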
* [C# Tests] Add support for double tensor output in TestPreTrainedModels. (#12008)
Add support for double tensor output in TestPreTrainedModels.
* DML EP ResNet50 opset 15 fails in ONNX checker for FusedBatchNormalization lacking training_mode attribute (#12010)
FusedBatchNormalization include training_mode attribute
* Generalize native op creation (#11539)
* create op from ep
* read input count from context
* create holder to host nodes
* fix typo
* cast type before comparison
* throw error on API fail
* silence warning from minimal build
* switch to unique_ptr with deleter to host nodes
* fix typo
* fix build err for minimal
* fix build err for minimal
* add UT for conv
* enable test on CUDA
* add comment
* fix typo
* use gsl::span and string view for Node constructor
* Added two APIs - CopyKernelInfo and ReleaseKernelInfo
* pass gsl::span by value
* switch to span<NodeArg* const> to allow for reference to const containers
* fix typo
* fix reduced build err
* fix reduced build err
* refactoring node construction logic
* rename exceptions
* add input and output count as arguments for op creation
* refactor static member
* use ORT_CATCH instead of catch
* cancel try catch
* add static value name map
* format input definition and set err code
* fix comments
* fix typo
* [DML EP] Pad operator: Handle negative pad counts (#11974)
* Pad fallback to CPU
* Added queryPad in operatorRegistration.cpp
* Acknowledged PR comments
* Used any_of
* used none_of instead of any_of
Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
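The query described above amounts to falling back to CPU whenever any pad count is negative; as a sketch (hypothetical predicate mirroring the `std::none_of` check):

```python
def dml_supports_pad(pads):
    # The DML path handles only non-negative pad counts; equivalent to
    # std::none_of(pads.begin(), pads.end(), [](int p) { return p < 0; })
    return not any(p < 0 for p in pads)
```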
* Add warning about future computation change for ConvTranspose with auto_pad (#11984)
* Add warning about future computation change for ConvTranspose with auto_pad
* improve msg
* update TODO to make lint happy
* update more contents for warning and add if
* VALID auto_pad was not affected
* move it into kernel registration
* parse auto_pad myself
* try to use conv_transpose_attrs_.auto_pad directly
* update roialign cuda impl to onnx opset16 (#12036)
* roialign opset16
* fix
* fix
* Fix windows eager build break by pinning to torch version 1.11.0 (#12033)
Fix windows and linux eager build to torch 1.11.0.
* Skip Constant Folding for ops producing an optional type output (#11839)
* Disable sequence-type tests since C# infra doesn't support them well (#12037)
* Extend lifetime of KernelDef when creating a standalone op (#12057)
place tmp kernel def as local variable to cover the lifetime of kernel creation
* Add targets files for new .net6 frameworks (#12016)
* Add net6 targets.
Remove maccatalyst as we don't have a native build targeting that.
* Set platform in macos targets
* Add targetFramework entries
* Move NativeLib.DllName definition and set using preprocessor values for simplicity. Couldn't get it to build with the preprocessor based setup when it was in a separate file.
Update the nuspec generation to set platform version for .net6 targets. TODO: Validate versions. I copied them from the managed nuget package the packaging pipeline generated prior to adding targets. Possibly we could/should lower some of the versions.
Hopefully the need to specify a version goes away when the release version of VS2022 supports .net6.
* Try android 31.1 as https://github.com/actions/virtual-environments/blob/main/images/win/Windows2022-Readme.md suggests that should be available on the CI machines
* Fix patch version mismatch
Add some extra debug info in case it helps
* Debug nuget location in CI
* Add workspace entry back in
* Add steps
* One more attempt with hardcoded nuget.exe path and original android31.0 version
* Better fix - found explicit nuget download and updated version there.
* flake8 fixes
* Fix black complaints.
* Exit Microsoft_ML_OnnxRuntime_CheckPrerequisites for net6 iOS.
* Removed outdated comment
* Fix DML custom operators which set descriptor heap to command list (#12059)
* Make C# runtest.sh automatically set latest opset (#12039)
* Update C# runtest.sh for opset 17
Should have been part of https://github.com/microsoft/onnxruntime/pull/11924
* get appropriate opset version from onnx doc
* use absolute rather than relative path
* fix typo in var name
* Disable DML command list reuse for Xbox (#12063)
disable cl reuse for xbox
* Add data type check in ConvAddRelu fusion (#12058)
* Add undocumented attribute to disable generation of Java bindings from the Android AAR. (#12075)
The generated bindings cause C# build errors that require workaround code. Disabling generation should avoid the need for any workarounds.
As the user has the C# ORT package with the C# to C bindings, there's no need for binding generation that calls the ORT Java API (which is C# -> Java -> C).
* enable the extensions custom build for java and android (#11823)
* generate quantization parameter for outputs (#12089)
* DML EP Update to DML 1.9 (#12090)
* Update to DML 1.9
* Appease obnoxious Python formatting tool
* Fix orttraining-linux-ci-pipeline - Symbolic shape infer (#11965)
fix symbolic shape error due to upgraded numpy + legacy sympy
* check consumers of dq node before swap dq and transpose (#12099)
* check consumers of dq node before swap dq and transpose
* add unit test
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: mohsin <mohsinx.mohammad@intel.com>
Co-authored-by: Ye Wang <52801275+wangyems@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: G. Ramalingam <grama@microsoft.com>
Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
Co-authored-by: Sheil Kumar <smk2007@gmail.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: sumitsays <sumitagarwal330@gmail.com>
Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
Co-authored-by: Chun-Wei Chen <jacky82226@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Wil Brady <25513670+WilBrady@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Jeff Bloomfield <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Justin Stoecker <justoeck@microsoft.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: pengwa <pengwa@microsoft.com>
* Specify list/map capacity when initializing where possible
- This really depends on the use case, but in some cases array/map resizing can be slightly costly; there is effectively no downside to setting the initial capacity for a collection when we know its final size for sure
- Introduce an extra utility to help create maps with expected capacity
* Move utility function to OrtUtil and drop MapUtil, also add Java doc to method
* Move test to the right class
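For the map utility, one common capacity formula (an assumption about what the `OrtUtil` helper does; Java's `HashMap` rehashes once size exceeds capacity × load factor, 0.75 by default) can be sketched as:

```python
def hash_map_capacity(expected_size, load_factor=0.75):
    # Smallest initial capacity that can hold expected_size entries
    # without triggering a rehash at the given load factor.
    return int(expected_size / load_factor) + 1
```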
Java side parts for configuring CUDA and TensorRT.
Adding tests for CUDA and TensorRT. Refactoring library loading logic as provider options need to have their shared library loaded before they can be constructed.
* Add android package build settings for full build
Co-authored-by: gwang0000 <62914304+gwang0000@users.noreply.github.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
* squashed commit for standalone tvm execution provider
* critical fix for correct python build with stvm ep
* get tuning log file from ep options. It has priority over AUTOTVM_TUNING_LOG
* updates and fixes
* update parsing of stvm provider options
* add support of external data for onnx model
* add conditional dump of subgraphs
* remove unused code
* get input tensor shapes through provider options. get output shapes for fixed input ones by TVM API
* support AUTO_TVM tuning log file inside ORT. Selector for Ansor and Auto_TVM is provider option (tuning_type)
* add fp16
* add conversion of model layout to NHWC if needed; the necessary parameter was added to STVM provider options
* fix license text in header. fix log format
* small fixes
* fix issues from flake8
* remove model proto construction from GetCapability
* reserve memory for vector of DLTensors
* add simple tutorial for STVM EP
* STVM docs
* jroesch/tvm -> apache/tvm
* remove dead code, unnecessary logs and comments
* fix in readme
* improve tutorial notebook
* tvm update
* update STVM_EP.md
* fix default value
* update STVM_EP.md
* some TODOs for the future development
* shorten long lines
* add hyperlink to STVM_EP.md
* fix Linux CI error
* fix error in csharp test
Co-authored-by: Jared Roesch <jroesch@octoml.ai>
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
* re-hipify all rocm EP sources
* fix all other files affected by re-hipify
* add cuda_provider_factory.h to amd_hipify.py
* do not use cudnn_conv_algo_search in ROCm EP, missing reduce min registration
* Fix ReduceConsts template specialization introduced in #9101.
Fixes the error when building for ROCm 4.3.1:
error: too many template headers for onnxruntime::rocm::ReduceConsts<__half>::One (should be 0)
* fix flake8 error in amd_hipify.py
* speed up hipify with concurrent.futures
* flake8 fix in amd_hipify.py
* First iteration of making cuda a shared provider.
Separated out shared OpKernel change, so doing this to merge with that change.
* More cuda shared library refactoring
* More cuda shared library refactoring
* More build options tested, converted the training ops over.
* Fix merge breaks
* Fix submodules
* Fix submodules
* Fix submodules
* Fix python
* Fix compile errors
* Duplicate symbol fix
* Test fix for ROCM provider
* Another ROCM test workaround
* ROCM Build Test
* ROCM build fix
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM
* ROCM test
* Reduce header dependencies
* Remove redundant namespace
* Test fix for linux
* Fix linux build
* Fix Eigen build error
* Fix unused parameter warning
* Test link error
* Another linker test
* Linker test
* Linker test
* Another test
* Another build test
* Fix linux link error
* Build test
* Fix control flow ops to use common base class with core code
* Remove extra qualifiers
* Fix template syntax for linux
* Fix cuda memory leak
* Fix pybind
* Test disabling cast
* Cleanup
* Restore cuda in test
* Remove more header dependencies
* Test not adding cuda provider to session
* Make GetProviderInfo_CUDA throw
* No-op cuda provider creation
* Fix some setup issues
* Fix memory cleanup on unload
* Diagnostics
* Don't unload library
* Add diagnostics
* Fix deleting registry at right time.
* Test disabling profiler
* Fix merge break
* Revert profiler change
* Move unloading of shared providers into Environment
* Free more global allocations before library unloads
* Add more diagnostics
* Move unloading back to the OrtEnv as there are multiple Environments created during a session.
Remove some library dependencies for tests.
* Fix more cmake files
* ERROR -> WARNING
* Fix python shutdown
* Test not using dml in pipeline
* Change python version and disable dml
* Update python version
* Test adding unload method for shared providers
* Disable DLL test
* Python test
* Revert "Python test"
This reverts commit c7ec2cfe98.
* Revert "Disable DLL test"
This reverts commit e901cb93aa.
* Revert "Test adding unload method for shared providers"
This reverts commit c427b78799.
* Point to RyanWinGPU
* Revert python version
* Fix id_to_allocator_map
* Another python exit test
* Remove extra debug messages
Try a more clean python shutdown through DllMain
* Revert DllMain idea, it didn't work
* Merge conflicts
* Fix merge with master issues.
* Comments
* Undo edit to file
* Cleanup + new training ops
* Revert yml changes
* Fix another merge error
* ROCM fix
* ROCM fix v2
* Put back Linux hack, it is necessary
* Stupid fixes
* Fix submodule out of sync
* ROCM fix 3
* ROCM 4
* Test java fix
* Fix typos
* Java test on my VM
* Fix build error
* Spotless fix
* Leave temp file around to load properly
* Fix cleanup on exit
* Fix break
* Java comments
* Remove LongformerAttentionBase workaround
* Spotless fix
* Switch yml back to regular build pool
* Revert "Switch yml back to regular build pool"
This reverts commit be35fc2a5a.
* Code review feedback
* Fix errors due to merge
* Spotless fix
* Fix minimal build
* Java fix for non cuda case
* Java fix for CPU build
* Fix Nuphar?
* Fix nuphar 2
* Fix formatting
* Revert "Remove LongformerAttentionBase workaround"
This reverts commit 648679b370.
* Training fix
* Another java fix
* Formatting
* Formatting
* For orttraining
* Last orttraining build fix...
* training fixes
* Fix test provider error
* Missing pass command
* Removed in wrong spot
* Python typo
* Python typos
* Python crash on exit, possibly due to unloading of libraries.
* Remove test_execution_provider from training build
Only enable python atexit on windows
Remove assert on provider library exit
* Still can't unload providers in python, alas.
* Disable Nvtx temporarily
* MPI Kernels for Training
* MPI Kernels part 2
* Patch through INcclService
* Oops, wrong CMakeLists
* Missing namespace
* Fix missing ()
* Move INcclService::GetInstance around to link nicer
* Missing }
* Missing MPI libraries for Cuda
* Add extra GetType functions used by MPI
* Missing Nccl library
* Remove LOGS statements as a test
* Add in a couple more missing GetType methods
* Update comments
* Missed a logging reference in mpi_context.h
* Convert aten_op to shared (due to merge with master)
* Test moving DistributedRunContext instance into shared provider layer
(with a deliberate error to verify it's being built properly)
* Test passed, now with fix
* Missing static
* Oops, scope DistributedRunContext to just NCCL
* Merge related issues and code review feedback.
* Merge error
* Bump to rel-1.9.1 (#7684)
* Formatting
* Code review feedback for Java build on non Windows
* Remove cupti library dependency from core library
* Test Java pipeline fix
* Linux build fix
* Revert "Linux build fix"
This reverts commit a73a811516.
* Revert "Remove cupti library dependency from core library"
This reverts commit 6a889ee8bf.
* Packaging pipeline fixes to copy cuda shared provider for tensorrt & standard packages
* Add cuda to Tensorrt nuget package
* onnxruntime_common still has a cuda header dependency
Co-authored-by: ashbhandare <ash.bhandare@gmail.com>
* test
* [gwang] make cmake compile work
* [gwang] enable building apks
* some build update
* add simple sigmoid test android project and cmake
* add build.py
* refine and remove unused import lib
* address CR comments
* remove unnecessary files
* add README.md
* minor update
* remove
* minor change
* fix ci failure and minor update
* fix typo in project folder
* remove
* remove and minor update
* refine
* minor fix
* fix
* fix typo
* add gradle spotlessApply task to fix CI failure
* fix
* enable spotlessApply in build gradle
* revert some changes
* minor fix
* run spotless apply for format
* address CR comments and fix CI version and format
* refine
* Refine
* address comments
* refine
* refine
* modify
* reformat
* resolve version conflicts
* minor update
* minor update
* address comments
* minor update
Co-authored-by: Guoyu Wang <wanggy@outlook.com>
Add providers for CoreML, ROCM, NNAPI, ArmNN
Adding the structs for OrtCUDAProviderOptions and OrtOpenVINOProviderOptions
Updating NNAPI flags.
Adding the new CoreML flag.
Adding hooks to the build system to tell Java about the new providers.
* Updates for Gradle 7.
* Adding support for OrtThreadingOptions into the Java API.
* Fixing a typo in the JNI code.
* Adding a test for the environment's thread pool.
* Fix cuda test, add comment to failure.
* Updating build.gradle
* Adding Java support for getAvailableProviders, addFreeDimensionOverrideByName, disablePerSessionThreads and getProfilingStartTimeNs.
* Fixing copyright years, running spotless and adding javadoc and an accessor to OrtProvider.
* Renaming OrtSession.getProfilingStartTimeInNs.
* Removing ngraph as it's been deprecated.
* Rearranging checks in onnxruntime_mlas.cmake to pick up Apple Silicon.
On an M1 Macbook Pro clang reports:
$ clang -dumpmachine
arm64-apple-darwin20.1.0
So the regex check needs to look for "arm64" first, as otherwise it
matches 32-bit ARM and you get NEON compilation failures.
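The ordering pitfall can be reproduced with a quick sketch (plain regexes standing in for the CMake `MATCHES` checks, not the actual cmake code):

```python
import re

machine = "arm64-apple-darwin20.1.0"  # clang -dumpmachine on Apple Silicon
# A case-insensitive 32-bit check also hits "arm64", so the 64-bit
# patterns must be tested first to avoid misclassifying Apple Silicon.
is_arm64 = re.search(r"arm64|aarch64", machine) is not None
looks_like_arm32 = re.search(r"ARM", machine, re.IGNORECASE) is not None
```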
* Adding Java side library loading support for Apple Silicon (and other aarch64 architectures).
* Adding Qgemm fix from @tracysh
* Fixes the java packaging on Windows.
* Missed a check in the java platform detector.
* Remove nGraph Execution Provider
Pursuant to nGraph deprecation notice: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md#deprecation-notice
**Deprecation Notice**
| | |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features
previously available through nGraph have been merged into the OpenVINO™
toolkit. As a result, all the features previously available through
ONNX RT Execution Provider for nGraph have been merged with ONNX RT
Execution Provider for OpenVINO™ toolkit.
Therefore, ONNX RT Execution Provider for **nGraph** will be deprecated
starting June 1, 2020 and will be completely removed on December 1,
2020. Users are recommended to migrate to the ONNX RT Execution Provider
for OpenVINO™ toolkit as the unified solution for all AI inferencing on
Intel® hardware.
* Remove nGraph Licence info from ThirdPartyNotices.txt
* Use simple Test.Run() for tests without EP exclusions
To be consistent with the rest of the test code.
* Remove nGraph EP functions from Java code
* Add options for nnapi ep
* Add nnapi flags test
* add comments
* Add flag comments
* Make the flags bitset const
* Fix build break
* Add stub changes to java and c# api
* Fix java related build break
* Fix java build break
* Switch to bit flags instead of bitset