* adding generic configurations for session options
* fix a build break on linux
* fix training ci build break
* fix training ci build break
* addressed CR comments
* fix training ci build break
* move config_key from enum to string
* add c# api
* add python api
* fix build break
* move prepacking from 2 new api entries to session options configs
* fix training ci build break
* add python test, update some comments, move const key definition to avoid build break
* addressed comments
* move definitions of keys to common.h
* move api to version 5
* remove accidental change in build.py
* remove pragma to avoid build break
* addressed CR comments
* fix the python build break, and move location of config keys definition
* small typo changes
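The commits above move session configuration from dedicated API entries and enum keys to generic string key-value pairs (with prepacking folded into session options). A minimal sketch of why that design scales — illustrative only, not the actual ORT implementation, and the key name below is an assumption:

```python
# Toy stand-in for a session-options object with generic string configs.
# Because both key and value are plain strings, adding a new option
# (e.g. a hypothetical "session.disable_prepacking") needs no new enum
# member and no new API entry point.

class SessionOptions:
    def __init__(self):
        self._configs = {}

    def add_session_config_entry(self, key: str, value: str) -> None:
        # Generic: any future option is just another key-value pair.
        self._configs[key] = value

    def get_session_config_entry(self, key: str, default: str = "") -> str:
        return self._configs.get(key, default)


opts = SessionOptions()
opts.add_session_config_entry("session.disable_prepacking", "1")
```

This mirrors the trade-off the commits describe: string keys keep the C/C#/Python API surface stable while new configuration knobs are added.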
* Sahar/csharp support openvino (#4703)
* Temp changes and include openvino to ensure nuget package is created with linux till we configure azure ci pipeline
* string id change
* native nuget indentation changes
* documentation changes
* Update Openvino_execution_provider.md
Documentation includes openvino execution provider
* Update OpenVino-ExecutionProvider.md
update details to build csharp api for openvino execution provider.
* vadm backend revert
* Update Openvino-Execution-Provider.md
updated for review comments
* Update OpenVino-Execution-Provider.md
* Update OpenVINO-ExecutionProvider.md
* nuget package custom support for openvino
change in native nuget spec python script for including linux runtime
* change to make path to boolean flag
* removed the tab
* Update OpenVINO-ExecutionProvider.md
updated for review comments
* changes to include pep8 warnings
modification to documentation
Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
* Changes to include csharp support for openvino
* Fix flake error
* Fix
Co-authored-by: sfatimar <64512376+sfatimar@users.noreply.github.com>
Co-authored-by: saharfraza <sfatima.3001@gmail.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
1. Publish the image ACR, instead of building it every time for every PR
2. Make USE_MKLML and USE_OPENMP able to co-exist. Currently both are enabled in our Linux CI build, but only one of them actually takes effect.
3. Split nuphar and DNNL into separate pipelines.
4. Fix two warnings in onnxruntime/core/optimizer/matmul_scale_fusion.cc and onnxruntime/test/tvm/tvm_basic_test.cc.
5. Update the manylinux2010_x86_64 image to the latest.
* bump cswinrt version
* add cswinrt
* test dotnetcore 3.0
* rename buildpackage source
* set folder path to the package source and not the version
* refactor .netframework tests
* build .net core anycpu
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* build e2e cppwinrt tests
* add use nuget task
* make all references to package version prop/target-ified
* remove dupe props/targets reference
* work around project.assets.json error by deleting it
* powershell test invocation
* switch to batch script
* print debug info
* update x86->x64
* stdio.h
* pushd/popd
* add csharp tests
* package.config -> packages.config
* typo
* x86 -> anycpu
* debug is default
* add test path
* update csproj as well
* debug
* really replace all package versions
* debug output
* really use [PackageVersion]
* sleep instead of converting async operation to task and waiting
* dont close software bitmap
* switch to powershell script
* remove binding check
* continue on failure
* continue on error action
* continueOnError and errorActionPreference
* tabbing
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* Add build option to disable traditional ML ops from the binary.
* Fix python tests by splitting tests for ML ops to a separate file. Exclude ML tests from onnx_test_runner and C# tests. Exclude ML op sources.
* Update Edge pkg pipelines with new MLops env variable and fix C# packaging pipeline tests to skip ML ops.
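The "disable traditional ML ops" build option above can be modeled as gated kernel registration: when the flag is set, nothing from the `ai.onnx.ml` domain lands in the kernel registry. A hypothetical sketch (op names are examples, not the real registration code):

```python
# Stand-in for a compile-time build option such as a disable-ML-ops flag.
DISABLE_ML_OPS = True

# A few example kernels; ai.onnx.ml is the traditional-ML domain.
KERNELS = [
    ("ai.onnx", "MatMul"),
    ("ai.onnx", "Relu"),
    ("ai.onnx.ml", "TreeEnsembleClassifier"),
    ("ai.onnx.ml", "LabelEncoder"),
]

def build_registry(disable_ml_ops: bool):
    # Skip every kernel in the traditional-ML domain when the flag is set,
    # so those ops are simply absent from the resulting binary/registry.
    return {
        (domain, op)
        for domain, op in KERNELS
        if not (disable_ml_ops and domain == "ai.onnx.ml")
    }

registry = build_registry(DISABLE_ML_OPS)
```

Tests that exercise ML ops then have to be split out and excluded, exactly as the commits describe for the Python, onnx_test_runner, and C# suites.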
1. Enlarge the read buffer size further, so that our code can run even faster. TODO: need to apply similar changes to the Python and other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set of x86_32. It is not a new model but previously we didn't run the test against x86_32.
1. Fix the nuget cpu pipeline and put code coverage pipeline back.
2. Reduce onnx_test_runner's default logging level from WARNING to ERROR, because there are too many log messages now.
3. Enlarge the protobuf read buffer size for onnx_test_runner. It was missed from PR #4020.
* Add amd migraphx execution provider to onnx runtime
* rename MiGraphX to MIGraphX
* remove unnecessary changes in migraphx_execution_provider.cc
* add migraphx EP to tests
* add input requests of the batchnorm operator
* add to support an onnx operator PRelu
* update migraphx dockerfile and removed one unused line
* sync submodules with master branch
* fixed a small bug
* fix various bugs to run msft real models correctly
* some code cleanup
* fix python file format
* fixed a code style issue
* add default provider for migraphx execution provider
Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
* Fix C# log APIs. Fixes github issue #3409.
* Fix build error due to accidental duplication of GraphOptimizationLevel
* Fix runoptions
* Fix broken test. Add --blame switch to dotnet test cmd line to print the failed test in case of crash.
* add windowsai.yml for new Microsoft.AI.MachineLearning nuget
* temporarily add windowsai.yml to gpu.yml
* pass in build arch
* remove install onnx task
* no dml for arm or arm64
* refactor nuget pipeline defs
* update package creation
* pass in build and sources path
* missing hyphens
* copy license file
* fix parameter variable
* disable arm builds for now
* remove commented script block
* download pipeline artifact name update
* set working dir
* Add bundling nuget script
* path combine
* null path
* combine needs parentheses
* binplace microsoft.* dlls in new nuget package
* update artifact name
* move merged nuget to artifacts directory
* move to merged subfolder in artifacts staging dir
* forward slash to back
* enable arm
* vcvarsall needs x64 vars setup
* Run Tests
* fix tests
* move global variables
* update yml to not have global variable in template
* removed parameters
* fixes
* Add build arch as an env variable
* ne not neq
* %Var% for batch script
* dont pass argument for x64
* disable arm tests
* skip csharp/cxx tests for microsoft nuget package
* remove test-win as it tests only c# cxx and capi
* test build for store apps
* dont build for store
* tools/nuget/generate_nuspec_for_native_nuget.py
* remove args.
* add new props and targets for microsoft.ai
* make windowsai props/targets static
* add dependency
* dont ship dot net props
* Remove c# from windowsai nuget
* copy license file
* native packages must have win10 as the platform, not win
* cuda header in wrong if branch
* no dml for arm builds
* only build dml for x64/ x86
* User/sheilk/props update (#3616)
* prelim store work
* props
* Fix desktop nuget props/targets
* clean up targets and make store apps work
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* update windowsai.yml with latest
* remove extra dloadhelpers
* Add abi headers to abi dir, and reference native includes
* update windowsai.yml
* minor update
* remove parameters
* add doesrp param
* hard code esrp to true
* add directml for x86/x64
* revert gpu yml changes
* add store builds
* add store builds
* add checks again in old way
* dup job names for store and desktop builds
* move all of the runtime binaries to win10 folder
* only set safeseh on x86
* disable the store builds for now... missing msvcprt.lib
* copy paste deletion...
* switch back to win- (#3646)
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* use stahlworks
* & not supported in ado
* add cuda to cpu nuget(???) and EnableDelayedExpansion to enable x86 dml package
* revert nocontribops
* add underscore...
* extra win/win10 change
* merged nuget... still not being bundled...
* files in merged directory
* missing parens causing dml to be included in cpu package
* more diagnostic info
* switch dir to get-childitem
* wait for compression to complete
* add winml_adapter to mklml and gpu packages
* enable_wcos
* add mklml binaries
* props and targets missing from mklml
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* Added FP16 transformations
* Revert "Added CMAKE_BUILD_TYPE to make building dynamic"
This reverts commit d3e17af1af655cfdc4d2fec33f52055caa525e85.
* Added FP16 transformations for FP16 builds
* Backend logic cleanup
Cleans the backend (intel_graph.*) code in the following ways:
1. Minimize global usage: Since all the IR graphs need to be
re-generated on every Infer, it is bad practice to rely on globals
for their saving and usage, as there would be multiple readers and
writers to the same global variable, leading to incorrect usage or
contention. This change replaces globals with locals where possible.
It also fixes an existing bug caused by incorrect global usage.
2. Remove all unused functions.
3. Remove all unused headers and preprocessor directives.
* removed commented out code
* Disabled default optimization for Intel EP
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fix missed plugins.xml for python bindings
* Fixed the build after latest master changes
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled unsupported ops for accelerators
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added some more disabled ops
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added environment variable to enable debugging
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added more debug statements
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed unsupported ops list for GPU and VPU
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed unsqueeze unit tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added error message to the status
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Overwrite Model proto with shape info from data
Overwrites the shape info of Model proto with the shape from
actual input data. Needed for inferring models with Dynamic
shapes.
* Removed print statement and disabled where op
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Reshape with Empty initializer
* Added more debug statements for 1P
* Don't allow 1D inputs with symbol for dimension
* Disabled some 3rd phase ops
* Disabled split and added zero dimension check for OutputDefs
* Cleanup zero dimensionality check
* Added different data type check for inputs and initializers
* Added conditions for Mod, Cast and Pad
* Removed unused variable
* Disabled scan and added conditions for squeeze
* Added changes for fixing all C++ unit tests
* Implements Backend Manager class for caching
Backend Manager provides a layer of indirection between EP interface
and OV backend that provides caching services for models with
symbolic dims in input shapes.
* clean up commented blocks
* clang-formatting
* Read I/O type info from ModelProto
Read the tensor element type information from ModelProto object,
as FusedNode is no longer available.
* code cleanup
* clang-formatting
* Added print statement for jenkins
* Disabled some python tests
* Changed the path of convert fp32 to fp16 hpp
* Added conditions for BatchNorm in GetCapability
* Fixed failed tests
* Revert "Added conditions for BatchNorm in GetCapability"
This reverts commit c3c28c3b00d27892c42546b35dacdd807a48ee90.
* Added Intel to onnxruntime backends
* pick up vars set by OV package setupvars.sh
* Added conditions for Identity
* remove a few cout prints
* Added conditions for GPU_FP32 unit tests
* Revert "pick up vars set by OV package setupvars.sh"
This reverts commit 8199e029c03eae21a1a7ef6bfdc93d00e5d0198b.
* Commented out fatal message for protobuf
* Might need to be removed
* Add interface class for current backend
* moved common logic to base class
* simplified cpu backend
* Removed unused headers
* use vectors to save i/o tensors for windows compatibility
* move utils fxns to backend_utils namespace
* rename ov_backend to ibackend
* Factory pattern for backend creation
* rename CPU backend to Basic backend
* renamed to vad-M and added to factory list
* Added conditions for VPU
* Added print statements
* Changed the logic for checking for symbolic shapes
* Modified logic for zero dimension check
* Removed VPU single dimension condition
* Removed comments
* Modified logic in DimensionCheck method
* Remove legacy OpenVINO EP
Remove all the legacy code for OpenVINO EP. UEP code will take its
place going forward.
This change does NOT remove OVEP files in the following areas, as
they will be reused by UEP:
1. Documentation: All .md files
2. Docker related files
3. Python bindings
4. Java bindings
5. C# bindings
6. ORT Server
7. CI pipeline setup files
* Rename Intel EP to OpenVINO EP
* Added unique names to the subgraphs
* Removed subgraphs with only constant inputs
* Modified subgraph partitioning algorithm to remove const input subgraphs
* Apply suggestion to onnxruntime/core/providers/openvino/openvino_execution_provider.cc
* Tracking output names to fix the output order bug
* Changed output names to a unordered map
* Modified logic to check for symbolic input shapes
* Fixed a bug in Reshape check
* Added empty model path to Model constructor
* Made necessary changes to cmake to build from the binary package
* Changed INTEL_CVSDK_DIR to INTEL_OPENVINO_DIR
* Enable dyn device selection with C++ API
* Added Round operator to unsupported list
* Modified subgraph partition logic for MYRIAD
* Removed supported ops from the list
* Enable dyn dev selection in Py API's
* Add documentation for dynamic device selection
* Use MYRIAD || HDDL instead of VPU
* Removed temporary cast of Int64 to FP32
* Disabled unit Tests for CPU_FP32 and GPU_FP32
* Removed default "CPU" from unit tests to allow overriding
* Removed ops Concat, Squeeze, Unsqueeze from unsupported list
* Get the device id from info
* Removed overwriting device_id and precision
* Enabled ConvTranspose and EyeLike
* Reordered unsupported ops in alphabetical order
* Fixed syntax error
* Fixed syntax error
* Code clean-up: Handle exceptions, logs and formatting
Code formatted according to ORT coding guidelines.
* remove debug print from pybind code
* updated docs with ops and models
* formatting prints
* Added default values for c and j for openvino
* Overriding the values set for c and j to be 1
* BACKEND_OPENVINO should be empty if openvino is not in build
* Overriding c value with default for perftest
* fix VAD-M device string bug
* Add IE error details to exceptions
* Use IE specific device names in EP
* Add VAD-F (FPGA) device support
* Removed unnecessary libraries from whl package
* Code changes for Windows compatibility
* Add VAD-F option to python API
* [revert before merge] cmake changes for RC
* Enable Windows build in CMake
* Unset macro OPTIONAL for windows builds
inference_engine.hpp's include chain defines a macro 'OPTIONAL'
which conflicts with the onnx project's headers when using MSVC,
so we need to explicitly unset it for MSVC.
* Use a single copy of plugin/IE::Core
Defined as a static member in Backend manager
* Remove restriction of single subgraphs for myriad
* Passed subgraph name to Backend to enhance log statements
* Disabled zero dimension conditions
* Disabled concat to remove zero dims
* Enabled building ngraph as part of ORT
* Removed serializing and added versioning
* Fix CPU_FP32 unit tests
* Removed unnecessary condition
* add ngraph.so.0.0 to .whl
* Check for zero dimensions only for inputs and outputs
* Restrict loading only 10 subgraphs on myriad
* Build ngraph.dll within UEP. Doesn't link yet
* Rename Linux included libngraph.so to libovep_ngraph.so
Renames locally built libngraph.so containing ONNX importer to
libovep_ngraph.so in order to avoid linkage conflicts with
libngraph.so supplied by OpenVINO binary installer.
Applies only for Linux builds.
* use output_name cmake properties for lib name
* fix .so name format in lib_name.patch
* CMake code cleanup
* Rename WIN32 included ngraph.dll to ovep_ngraph.dll
To avoid conflict with ngraph.dll distributed by openvino.
* Added myriad config for networks without 4 dimensions
* Loading the 10 max clusters for inference on myriad
* Refactor code and add Batching support
Encapsulate subgraph settings into context structs.
Add batching support for completely supported models.
* Disabled some broken tests
* use input_indexes to avoid batch-checking initializers
* Avoid static initialization order error on WOS
* Added candy to broken tests
* InternalCI changes for 2020.2
* Updated DLDT instructions
* Unsaved changed in install_openvino.sh
* Changes after manual check
* Remove custom ngraph onnx_import build for WOS
The ONNX Importer on WOS does not have the protobuf issue.
* Remove FP32ToFP16 ngraph pass
This conversion is performed implicitly within IE.
* Surround debug logic by #ifndef NDEBUG
* remove invalid TODO comments
* removed references to ngraph-ep
* clang-formatting
* remove commented code
* comment edits
* updating copyright year to that of first OpenVINO-EP release
* remove redundant log msg
* Modified operator and topology support
* Update build instructions
* doc formatting
* Fixed clip unit tests
* Revert "Remove FP32ToFP16 ngraph pass"
This reverts commit ec962ca5f315a5658ad980e740196f19de2639c1.
* Applying FP16 transformation only for GPU FP16
* Fixed GPU FP32 python tests
* automatically use full protobuf
* disable onnxrt server for now
* Disabled upsample
* update dockerfile instructions
* Removed MO paths and added ngraph path
* Remove OVEP from ORT Server docs
Will put it back in after validation
* Updated path to Ngraph lib
* Disabled Resize and some other python tests
* Removed unnecessary header files
* Use commit SHA to fetch ngraph repo
* Avoid un-needed file changes due to version update
* Fixed clip tests
* Fixed Pow, max and min onnx tests
* build.md doc typo
* Update cmake patch command for ngraph src
* remove dead cmake code for onnxruntime_USE_OPENVINO_BINARY
* use spaces instead of tab
* remove commented code
* Add info about protobuf version
* edit debug env var and enable for WIN32
* specify only version tag of 2020.2 for dockerbuilds
* remove unnecessary file changes
* Pass empty string as default argument to C# tests
* Use ${OPENVINO_VERSION} to name openvino install directory in CI builds
* Enabled unnecessarily disabled tests
* Fixed ngraph protobuf patch
* Fixed error in protobuf patch
* Revert "Use ${OPENVINO_VERSION} to name openvino install directory in CI builds"
This reverts commit 89e72adb8bf3b9712f5c81c5e13fe68c6c0df002.
* Remove unsetting OPTIONAL macro
This is no longer used in recent ONNX update onnx/onnx@da13be2,
so this unset workaround is no longer necessary.
* Use a null string default argument for C# API
* Set OpenVINO version yml files and pass to CI Docker builds
Git Tag info for DLDT as well as install directory are set
using this value.
This reverts commit 9fa9c20348ed72ae360a95c98e9b074d2f9fafc5.
* Documentation: recommendation and instructions for disabling ORT graph optimizations
* more doc updates
* Reduced the number of models according to CI time constraints
Co-authored-by: ynimmaga <yamini.nimmagadda@intel.com>
Co-authored-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com>
Co-authored-by: mbencer <mateusz.bencer@intel.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>
Co-authored-by: suryasidd <48925384+suryasidd@users.noreply.github.com>
* Migrate winml to Microsoft Namespace (packaging changes are pending)
* add ns_prefix toggle
* fix packaging
* Users/sheilk/add missing raw header (#3484)
* add dualapipartition
* wrong variable for repo root
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
* remove existence check to force failures
* extra paren
* dualapipartition needs to be referenced from the source
* add microsoft.ai.machinelearning.dll to the output dir
* rename the idl file so that assembly info is correctly added into the winmd
* fix namespaces
* update namespaces
* default to microsoft, and add namespace override as build argument
* update cmakesetings.json as well
* remove from cmakelists.txt
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
* update C# API to optimize inference latency
* rename PinnedOnnxValue to FixedBufferOnnxValue and fix build break
* add more test cases
* add conditions on string tensors for pre-allocated outputs
* change to random inputs
* fix spelling
* resolve comments
* resolve comments
* remove FixedBufferOnnxValueTests.cs
* fix trivial typos in doc
We want to implement SoftmaxCrossEntropy and NegativeLogLikelihoodLoss forward training ops for opset-12, but that requires the ONNX submodule to point to the latest commit to pick up the latest ONNX spec.
- Reverse integrate changes from *.in.proto files in github ONNX repo.
- Regenerate csharp/test/Microsoft.ML.OnnxRuntime.Tests/OnnxMl.cs
- Disable ONNX tests that don't have op implementation for the latest opset.
* add dml gpu pipelines
* add x86 to the gpu dml dev build pipeline
* Enable DML x86 builds
* Fix uint64_t -> size_t warning
* fix warnings
* enable dml on x86 ci builds
* operatorHelper 773 error uint32_t vs uint64_t
* operatorHelper 773 error uint32_t vs uint64_t
* make x86 pipeline use the gpu pool
* more warnings
* fix x86 directml path
* make dml nuget package
* disable tf_pnasnet_large
* disable zfnet512
* make validation use wildcards
* disable x86 dml gpu tests
* add args.
* update gpu.yml
* change nupkg wildcard
* add debug statements
* package x86 dml nupkg
* dont drop managed nuget again from dml pipeline build
* Add DML EULA
* directml license should be renamed to not clobber the existing license
* casing on dml package....
* {} to ()
* fix license name
* disable dml from x86 ci
* typo and cr feedback
* remove featurizers
* ship the dml pdb as well
* Add a summary for each ExecutionProviderAppend methods in SessionOptions.cs
The OnnxRuntime managed dll is EP-agnostic, meaning it exposes all methods pertaining to all the EPs that OnnxRuntime supports in general. Not all of these methods are actually available to a .NET developer unless they have the corresponding native onnxruntime shared library. Adding a summary line so that IntelliSense points that out.
* remove empty line
* Initial commit
* More changes
* More changes
* More changes 3
* More changes 4
* More changes 5
* More changes 5
* More changes 6
* More changes 7
* More changes 8
* Remove C# ifdefs
* More changes 10
* More changes 11
* YAML changes for other release pipelines
* Add release notes metadata
* Props and Targets change
* Add CSharp proj
* More changes 12
* More changes
* Minor fix
* Minor fix
* Fix yaml
* Some missing logic for winml
* Minor update
* Fix casing for winmd file
* Fix casing
* Add targets and props for managed section into native nuget
* revert file
* a