* Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose
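The auto_pad handling this fix targets can be sketched in Python from the ONNX ConvTranspose spec (a minimal per-axis sketch; the function name is ours, not ORT's):

```python
def convtranspose_auto_pads(auto_pad, in_size, stride, kernel,
                            dilation=1, output_padding=0):
    """Compute (pad_begin, pad_end) for one spatial axis of ConvTranspose
    when auto_pad is SAME_UPPER or SAME_LOWER, per the ONNX spec."""
    out_size = in_size * stride  # SAME_* forces output = input * stride
    total = (stride * (in_size - 1) + output_padding
             + ((kernel - 1) * dilation + 1) - out_size)
    if auto_pad == "SAME_UPPER":    # extra padding goes at the end
        return total // 2, total - total // 2
    if auto_pad == "SAME_LOWER":    # extra padding goes at the start
        return total - total // 2, total // 2
    raise ValueError(f"unexpected auto_pad: {auto_pad}")
```

For example, with input size 3, stride 2, kernel 3, the total padding is 1, and the two modes differ only in which side absorbs the odd element.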
* Bump ONNX to 1.10.2 globally
* load ONNX_VERSION from VERSION_NUMBER
* revert deprecate warning in ORT 1.12
* add a comment about why cntk_simple_seg was removed
* correct the implementation in DML as well
* Update C# runtest.sh for opset 17
Should have been part of https://github.com/microsoft/onnxruntime/pull/11924
* get appropriate opset version from onnx doc
* use absolute rather than relative path
* fix typo in var name
* Add .net6 support to the C# nuget package.
Currently requires jumping through a lot of hoops due to .net 6 only being supported in the preview release of VS 2022.
Build existing targets using msbuild.
Add .net6 targets and build using dotnet.
Create nuget package with combined targets.
A few misc automated changes from VS: spacing adjustments and a couple of added properties.
* Rework the EP factory creation setup so we're not cut-and-pasting function declarations in multiple places.
Convert append EP for SNPE to be generic, and also use for XNNPACK.
Add XNNPACK to C# API
* Don't need stub for MIGraphX as it's using provider bridge.
* Remove old 'create' functions that aren't applicable now that the EPs are built as separate libraries.
* Only use EPs that require the layout transform if the opset is supported by the layout transformer.
* Update wasm registration of xnnpack.
* Include the onnxruntime binary when not using a package reference or a UAP app.
* Remove the lib\uap10.0 build from the nuget package as it was causing conflicts
* Add UWP test
* remove build files
* remove local change
* reset mimalloc and onnx-tensorrt
* change username to Microsoft
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* squashed commit for standalone tvm execution provider
* critical fix for correct python build with stvm ep
* get tuning log file from ep options. It has priority over AUTOTVM_TUNING_LOG
* updates and fixes
* update parsing of stvm provider options
* add support of external data for onnx model
* add conditional dump of subgraphs
* remove unused code
* get input tensor shapes through provider options; get output shapes for fixed input shapes via the TVM API
* support the AUTO_TVM tuning log file inside ORT. The selector between Ansor and AUTO_TVM is the provider option (tuning_type)
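The provider-option plumbing described above could be exercised from Python roughly as follows (a sketch only: the tuning-related option keys such as `tuning_file_path` and `input_shapes` are assumptions inferred from these commit messages, not confirmed API names):

```python
# Hypothetical TVM EP option keys, inferred from the commit messages above.
tvm_options = {
    "tuning_type": "Ansor",           # selector between Ansor and AUTO_TVM
    "tuning_file_path": "tune.log",   # takes priority over AUTOTVM_TUNING_LOG
    "input_shapes": "input_1:[1 3 224 224]",  # fixed input shapes
}

# Passing them requires an ORT build that includes the TVM (STVM) EP:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=[("TvmExecutionProvider", tvm_options), "CPUExecutionProvider"],
# )
```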
* add fp16
* add optional conversion of the model layout to NHWC. The necessary parameter was added to the STVM provider options
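On a single tensor (as opposed to the whole model, which the EP handles), the NCHW-to-NHWC layout change above amounts to moving the channel axis last:

```python
import numpy as np

# NCHW: batch, channels, height, width
x_nchw = np.zeros((1, 3, 224, 224), dtype=np.float32)
# NHWC: batch, height, width, channels
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))
```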
* fix license text in header. fix log format
* small fixes
* fix issues from flake8
* remove model proto construction from GetCapability
* reserve memory for vector of DLTensors
* add simple tutorial for STVM EP
* STVM docs
* jroesch/tvm -> apache/tvm
* remove dead code, unnecessary logs and comments
* fix in readme
* improve tutorial notebook
* tvm update
* update STVM_EP.md
* fix default value
* update STVM_EP.md
* some TODOs for future development
* shorten long lines
* add hyperlink to STVM_EP.md
* fix Linux CI error
* fix error in csharp test
Co-authored-by: Jared Roesch <jroesch@octoml.ai>
Co-authored-by: Valery Chernov <valery.chernov@deelvin.com>
Co-authored-by: KJlaccHoeUM9l <wotpricol@mail.ru>
Add Xamarin support to the ORT nuget packages.
- Update C# code to support Xamarin builds for iOS and Android
- refactor some things to split out common code
- include iOS and Android ORT native shared library in native nuget package
* Revert "Cleanup C# bindings to add EP (#8810)"
This reverts commit b21ea00020.
* Add back in a minimal set of changes.
Provide stubs for a limited set of things:
- things called from C# using a static lib of ORT built for mac/ios
- things in OrtApis that are not included in the build by default
- things in OrtApis that are excluded in a minimal build
* Clean up order of EPs in test
* Fix unused function in ROCM build
Fix C# add EP bindings.
Add stubs to ORT so that if EP is not included in the build we return a graceful error message.
Move the declaration of stubs into the C API and out of the EP so they're in one place and easier to use (no extra header required in the C/C++ world, and consistent with the CUDA EP setup).
Fix inconsistency in ROCM EP.
Cleanup a few other things.
* Merge CPU/GPU nuget pipeline
* Include TensorRT EP libraries into existing GPU nuget package pipeline
* modify to use correct YAML
* Modify for test
* modify for test
* Add dependency
* Add dependency (cont.)
* modify for test
* Add create TensorRT nuget package
* modify for test
* modify for test
* fix merge bug
* code refactor
* code refactor
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* modify for test
* cleanup
* modify for test
* fix bug
* modify for test
* refactor
* fix bug and test
* Modify for test
* Modify for test
* Modify for test
* Modify for test
* Prepare for PR
* Prepare for PR
* code refactor from review
* Remove naming 'Microsoft.ML.OnnxRuntime.TensorRT' to avoid confusion
* Add linux TensorRT libraries
* Remove redundant variable in YAML
* revert file
* undo revert file
* Modify regular expression so that it can capture the correct file
* Remove newline at end of file
* small fix
* Revert to CUDA11.1 on Windows
* Add unit tests for nuget package on Linux
Co-authored-by: Changming Sun <chasun@microsoft.com>
Merge CPU/GPU nuget pipeline. The old GPU nuget pipeline will be used only for DML.
TODO: the resulting GPU package contains PDB files for some of the DLLs, but not all. This is due to the refactoring of the CUDA EP into pluggable DLLs; at that time we forgot to copy the PDB files. However, I can't add them now, because the package is already 220MB. Adding the missing PDB files would make it oversize, and nuget.org doesn't accept packages larger than 250MB.
* updates for picking pnnx commit
* add tests filter to c# tests
* plus test fixes
* fix versioning for contrib ops
* fix tests
* test filter for optional ops
* more versioning related updates
* fix test
* fix layernorm spec
* more updates
* update docs
* add more test filters
* more filters
* update binary size threshold
* update docs
* plus more fixes
* updates per review
* update to release commit
* add filters for optional type tests
* plus updates
1. Update SDLNativeRules from v2 to v3. The new one allows us to set excluded paths.
2. Update TSAUpload from v1 to v2, and add a config file ".gdn/.gdntsa" for it.
3. Fix some parentheses warnings
4. Update cmake to the latest.
5. Remove the "--x86" build option from pipeline yaml files. We can now auto-detect the cpu architecture from python, so we don't need to ask the user to specify it.
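The auto-detection mentioned in item 5 can be done in Python roughly like this (a sketch; the mapping and function name are ours, not the pipeline's exact code):

```python
import platform

def detect_arch(machine=None):
    """Map a platform.machine() string to a build architecture name."""
    m = (machine or platform.machine()).lower()
    if m in ("amd64", "x86_64"):
        return "x64"
    if m in ("x86", "i386", "i686"):
        return "x86"
    if m in ("arm64", "aarch64"):
        return "arm64"
    return m  # pass through anything unrecognized
```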
* prepare for C# to configure provider options
* add c# code
* revert modification
* Add update provider info configuration in trt ep side
* fix bugs
* fix bug for compiler error C2259
* Add c# test
* fix bug
* fix bug
* Properly deal with string
* Add c# api for accepting trt provider options
* fix bug
* Modify C# test
* add shared lib test
* Add get provider options functionality
* clean up
* clean up
* fix bug
* fix bugs for CI
* Fix bugs for CI and documentation
* Move TRT EP provider options related functions out of C API
* revert
* fix bug
* refactor
* add check for provider options string
* code refactor
* fix CI bug
* Fix CI bugs
* clean up
* fix bug
* Fix bug for Post Analysis
* fix accidental bug
* Add API_IMPL_BEGIN/API_IMPL_END
* clean up
* code refactor
* code refactor
* fix CI fail
* fix bug
* use string append
* Change the code to better handle strncpy and string append
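For comparison, the same TensorRT provider options surfaced to C# here are passed as a plain dict in the Python API (keys like `trt_fp16_enable` and `trt_max_workspace_size` exist there, though the exact supported set is version-dependent):

```python
trt_options = {
    "trt_fp16_enable": True,                # enable FP16 precision
    "trt_max_workspace_size": 2 * 1024**3,  # 2 GB TensorRT workspace
    "trt_engine_cache_enable": True,        # cache built engines on disk
}

# With an ORT GPU build that includes the TensorRT EP:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=[("TensorrtExecutionProvider", trt_options),
#                "CUDAExecutionProvider", "CPUExecutionProvider"],
# )
```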
1. Remove some unused code and simplify tools/ci_build/github/linux/run_dockerbuild.sh.
2. Enable Nuget CUDA tests. The original design was that we could leverage Directory.Build.props and let cmake generate the required properties (USE_CUDA/...) there. However, in the nuget packaging pipeline we test the package on a different host that doesn't run the cmake command and doesn't have the auto-generated Directory.Build.props file.