* Bump onnx to latest
Update onnx.in.proto with changes for SparseTensor.
* add temp skip tests
* remove passed tests from skip list
* skip more tests for new ops in opset 11
* skip crashing tests
* update handling of new attribute types sparse tensor and sparse tensors
* advance onnx commit and remove skip cpu_flaky_tests
* temporarily skip yolo3 model test due to resize opset10 shape inference regression
* update proto for onnxruntime server
* advance onnx commit further
C/C++ Opaque APIs
Add new virtual interfaces for NonTensorType
Implement entry points.
Add shared header for the data container.
Add export symbols.
Add serialization/deserialization.
Implement model with Opaque types.
Rework opaque_api_test as a standalone executable.
* Mention OrtCreateSessionFromArray in C API doc
* Add GetDataTransfer() interface in the EP.
* Check return status of RegisterDataTransfer
* Address PR comments
* Rework the feed/fetch copy setup so that it can be calculated upfront by the control flow nodes. Also simplifies how it all works.
Update the control flow nodes to do the calculation prior to graph execution.
* Implement Nuphar execution provider
Nuphar execution provider is a TVM-based compilation provider. It has shown great speedups for RNN models using Scan.
This PR is mainly for a preview of the shared codegen library for other TVM-based providers.
* Fix submodules
* Fix TVM submodule
* Update Nuphar to latest and resolve conflicts
* Remove stale files caused by merge -X theirs
* Revert heap buffer change to not introduce onnxruntime_framework into onnxruntime_perf_test
* Fix bad merge
* Merge from Nuphar
* Fix warning treated as error, revert some unnecessary changes
* Revert some more test changes
* Some more test reverts and comments to make review easier
New tests could be added later
* One more revert of unnecessary changes
* More reverts of unnecessary changes. Tests could be added back later.
* Mention OrtCreateSessionFromArray in C API doc
* Don't create the default allocator every single time. Rename API accordingly.
* updates...
* updates...
* PR comments
* fix typo in license header
* fix build
1. Let mlas use the session thread pool
2. Remove the onnxruntime_USE_MLAS cmake option
3. Remove the win32 thread pool code inside mlas
mlas will:
1. use the ort thread pool if one is passed in
2. use openmp if the threadpool parameter is nullptr
3. run single-threaded if the threadpool parameter is nullptr and openmp is disabled.
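The three-way dispatch above can be sketched as follows. This is an illustrative Python model of the selection logic only; the function and parameter names are hypothetical, not the actual MLAS API, and the OpenMP flag stands in for a compile-time check.

```python
# Hypothetical sketch of the MLAS threading dispatch described above;
# names are illustrative, not the actual MLAS API.
OPENMP_ENABLED = True  # stands in for a compile-time _OPENMP check


def choose_threading(threadpool, openmp_enabled=OPENMP_ENABLED):
    """Return which execution strategy MLAS would pick."""
    if threadpool is not None:
        return "ort-threadpool"  # 1. use the ORT thread pool if one is passed in
    if openmp_enabled:
        return "openmp"          # 2. fall back to OpenMP when no pool is given
    return "single-threaded"     # 3. otherwise run single-threaded


print(choose_threading(object()))                     # ort-threadpool
print(choose_threading(None))                         # openmp
print(choose_threading(None, openmp_enabled=False))   # single-threaded
```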
Added Sample Featurizer and Infrastructure
Make featurizers and unit tests compile and run with GTest.
Create definitions for the first featurizer kernel.
Add new operator domain.
Create datetime_transformer kernel and build.
Move OPAQUE type definitions for featurizer kernels out to a separate cc.
Register them with the type system.
Provide unit tests for new AutoML DateTimeTransformer kernel.
Make necessary adjustments to the test infrastructure to make it run
with new types.
- Added python script for generating markdown doc from the registered opkernels.
- Made some conditional changes in the pybind to expose necessary python API
- Added some missing type-constraints in the op kernel registrations
* Mention OrtCreateSessionFromArray in C API doc
* review changes
* use enum for graph optimization level
* Use explicit values for enums
* updates...
* Add friendly enum for graph optimization levels in C, C# and Python APIs.
* Fix linux build
* Fix build breakage due to master merge
* PR comments
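The "friendly enum with explicit values" change above can be sketched like this. The member names and numeric values here are assumptions for illustration, not the authoritative ONNX Runtime definitions.

```python
from enum import IntEnum


# Illustrative sketch of a "friendly" graph-optimization-level enum with
# explicit values, mirroring the C/C#/Python change described above.
# Member names and values are assumptions, not the actual ORT definitions.
class GraphOptimizationLevel(IntEnum):
    ORT_DISABLE_ALL = 0
    ORT_ENABLE_BASIC = 1
    ORT_ENABLE_EXTENDED = 2


# Explicit values keep the Python/C#/C bindings in sync with the C enum,
# so a level serialized from one binding round-trips through another.
assert GraphOptimizationLevel.ORT_ENABLE_BASIC == 1
print(GraphOptimizationLevel(2).name)  # ORT_ENABLE_EXTENDED
```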
* Mention OrtCreateSessionFromArray in C API doc
* Fix perf test executable due to removal of certain C APIs
* fix linux build
* Avoid duplication
* Fix mem leak
* remove memory copy between CUDA and TRT
* add info to RegisterExecutionProvider input
* use new IDeviceAllocator for trt allocator
* remove SetDefaultInputsMemoryType from TRT EP
* remove onnx-tensorrt 5.0
* add submodule onnx-tensorrt branch 5.1
* remove redundancy
* Update transformer_memcpy.cc
* Update tensorrt_execution_provider.cc
* switch to TensorRT 5.1.5.0
* update python binding
* disable failed test case on TensorRT
* Update activation_op_test.cc
* upgrade to TensorRT container 19.06
* update according to feedback
* add comments
* remove tensorrt allocator and use cuda(gpu) allocator
* update onnx-tensorrt submodule
* change ci build cuda directory name
* A few performance improvements:
- Make the iteration in NonZero more efficient by using a raw pointer and simplifying the increment logic
- add another unit test to check the new logic works with 3 dimensional tensor
- gains about 2% for ssd_mobilenet
- Avoid floating point operations on each iteration on Concat
- about 0.5% for ssd_mobilenet and ssd_resnet34
- Put common case first in ExecutionFrame::AllocateAsPerAllocationPlan to avoid unnecessary call to IsSparseTensor
- about 0.05% for ssd_mobilenet
- Minor tweak to put some ctors in the TensorShape header so they can be inlined more easily
* If there is an outer scope value that matches a subgraph input, don't create an implicit input from the outer scope value.
Minor unrelated change for issue noticed while debugging: Use unordered_set for implicit inputs so we don't add them multiple times.
* Add unit test based on onnx issue.
* Add string attribute interface for C API.
* Add string attribute interface for C++ API accordingly.
* Update comment to say that string is also valid
* Use INFO instead of WARNING for an unused graph input.
* Drop severity of unused initializer as well
* Update to output a warning level message if removing an initializer that is never used, and an info level message if removing an initializer that optimization has made redundant.
* Now that we check for a constant initializer in an ancestor graph we also need to be able to retrieve and replace that initializer.
Add helpers to do so.
Update optimizers to use the new helpers.
Fix bug in UnsqueezeElimination where it wasn't checking if the initializer it was replacing was constant.
This change integrates the NCHWc support recently added to MLAS into ONNX Runtime. When using "-o 3" optimizations, then the runtime will do a NCHWc layout optimization pass to convert standard ONNX operators such as Conv/MaxPool to the com.microsoft.nchwc domain with weights and biases reordered for speed.
Description:
Disallow overriding an initializer via a graph input if the IR version is < 4. This enforces an implicit assumption that initializers should be treated as constant, and allows constant folding to be done on a model with an older IR version.
Separate constant and overridable initializers so that it's clear which ones constant folding can utilize.
Update Graph to not add all initializers to the graph inputs when the graph is manually created (i.e. not loaded from a GraphProto) and the IR version is >= 4.
Motivation and Context
In order to do constant folding we need to know which initializers can be treated as constant and which are overridable. All initializers were required to have a matching graph input prior to IR version 4, technically making all of them overridable. The intention however was for them to be treated as constants, and this change enforces that intent.
The benefit of doing so is that constant folding will work for models with IR version < 4. The cost is that if someone is actually overriding an initializer they will need to update the IR version of their model to version 4 in order to keep doing so. The belief is that this is a very small subset of usage (e.g. models involving feeding in a truncated sequence) and the cost to update that small subset is warranted by the benefit of constant folding being able to be enabled on all older models without them needing an IR version update.
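The classification rule described above can be sketched as a small function. The helper name and signature are hypothetical, not the actual Graph API; only the IR-version rule itself comes from the description above.

```python
# Sketch of the initializer-classification rule described above;
# the helper name and signature are hypothetical, not the actual Graph API.
def classify_initializers(initializer_names, graph_input_names, ir_version):
    """Split initializers into constant vs. overridable sets."""
    constant, overridable = set(), set()
    for name in initializer_names:
        # Before IR version 4, initializers are treated as constant even when
        # a matching graph input exists, enabling constant folding on old models.
        if ir_version >= 4 and name in graph_input_names:
            overridable.add(name)
        else:
            constant.add(name)
    return constant, overridable


# IR version 3: everything is constant, so constant folding may use "W" and "B".
const, over = classify_initializers({"W", "B"}, {"X", "W"}, ir_version=3)
assert const == {"W", "B"} and over == set()

# IR version 4: "W" has a matching graph input, so it is overridable.
const, over = classify_initializers({"W", "B"}, {"X", "W"}, ir_version=4)
assert const == {"B"} and over == {"W"}
```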
* init
* Update DNNLibrary
* Update DNNLibrary, set compiler flags, it compiles now
* Add more missing flags, add test
* Update DNNLibrary
* Update Compile method, fix allocator and some other bugs
* Update DNNLibrary
* Implement CopyTensor
* Not delete state explicitly since it is managed by unique_ptr
* Add the missing files when SingleUnitTestProject is ON
* misc changes
* Fix wrong name in provider factory
* Add my own test
* Update the code that adds nodes into the graph, and add the missing initializers into the graph
* Fix the bug where re-building the graph produces extra outputs
* Update DNNLibrary
* Transpose nchw (ONNX) -> nhwc (NNAPI)
* Add license
* Add GetSupportedNodes method (implement it later)
* Rename onnxruntime_nnapi_test->onnxruntime_nnapi_squeezenet_test
* Update squeezenet_test.cpp after rebase master
* Remove squeezenet_test.cpp since it is almost same with the c++ sample
* Update DNNLibrary for GetSupportedNodes
* Update GetSupportedNodes
* Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample"
This reverts commit a97575fd9ff49e50ba1dc8d8154790d8cd86c48d.
* Update DNNLibrary
* Fix multiple outputs bug
* Remove GetKernelRegistry
* Revert "Revert "Remove squeezenet_test.cpp since it is almost same with the c++ sample""
This reverts commit 2a0670e9cbf10ea654111ce39e198a4be0ddd838.
* Set default memory type of NNAPI EP
* Add CPUOutput allocator
* Update DNNLibrary for multiple outputs
* Fix bug of nhwc->nchw
* Remove GetExecutionHandle()
* Initial commit for OpenVINO Execution Provider
OpenVINO Execution Provider provides the interface for ONNX Runtime
applications to access Intel's hardware accelerators using Intel's
OpenVINO Toolkit.
* Fixed bug in GetCapability to disable custom ops
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added OPENVINO ci pipeline
Added new pipeline for openvino provider,
made changes to support the docker build and
onnxruntime build with openvino.
Signed-off-by: Luis Daniel Castellanos <luis.daniel.castellanos@intel.com>
* Enabled all unit tests for OpenVINO EP
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed syntax issue in run_docker_build.sh file
* Added missing default OPENVINO_VERSION
Default value for OPENVINO_VERSION env was
missing causing the build to fail
* Added install Model Optimizer deps step
* Fixed python unit tests and some tests from onnx_backend_test_series
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed indentation bug
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled some of the python backend tests for OpenVINO
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled some model tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Remove Duplicate checks for openvino in build.py
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Modified GetCapability for FP16
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled GPU FP32 tests that are not supported
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Convert modelProto to string and use it in compile
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Pass byte-array input args to MO
* Serialized ModelProto passed in-memory to MO
ModelOptimizer python module receives the serialized ModelProto
in-memory.
Uses appropriate ONNX function to load the serialized bytes.
* Make Py_Finalize compatible with older python versions
Also, remove the possibility of an unassigned pFunc variable.
* Fall back if input dims of MatMul are greater than 2
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* fixup: Device #define syntax
* Updated the documentation
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Enable dynamic dim value
* removed commented out code
* Added Dockerfile for openvino EP
Updated instructions on dockerfiles/README.md file
Signed-off-by: Luis Daniel Castellanos <luis.daniel.castellanos@intel.com>
* Disabled fp16_inception_v1 test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Code formatting with clang-format
Uses style from the .clang-format file in root directory.
* fixup: docker tag and build error fixes
* Heuristics to automatically detect batching
Distributes slices from batch into parallel infer-request objects.
* Handle disabled tests in GetCapability
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled average pool and max pool if ceil_mode is 1
Also dilations are not supported if they are greater than 1
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Unsqueeze int32 test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* changes to fix output results bug
* Disabled a few C++ unit tests for MYRIAD FP16
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Manually revert '9fe162bb Enable dynamic dim value'
Reverts the compile-time setting of dynamic shape.
Reverting manually due to significant auto-revert conflicts.
* Fixed unused variable warning
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Mul test for GPU_FP16 due to accuracy issue
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* VPU documentation update
* Disabled inception_v1 for MYRIAD and HDDL
Also disabled a few C++ accuracy tests for HDDL
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* updates from upstream
* use the new CustomOpApis for I/O interfacing
* Pass initializers as subgraph meta-def inputs in GetCapability()
Requirement due to API changes introduced with PR# 1019.
* Remove obsolete functions
* Save indexes of graph inputs from fused_node info
Both inputs and initializers are passed as data inputs to the
infer function. To identify only inputs among them, save their
index info from fused_node in the Compile function.
* Documentation changes to enable VPU
* Fix VPU related changes in documentation
* Fix minor changes in documentation
* Fix VPU related changes in documentation
* Use Node.In/OutputDefs() to track graph inputs and outputs.
Don't use graph_viewer's GetInputs() or
GetInputsIncludingInitializers().
* Permit "SAME_UPPER" auto_pad attribute for MaxPool
* Disabled fp16_tiny_yolov2 in onnx model tests
* Updated documentation to include configuration guides for myriad and hddl
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Use 8 Infer requests only for VAD-R
* disable debug prints
* Clang-format source files
* Updated BUILD.md with OpenVINO R5 links
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled same upper python tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Update test exclusion syntax
* Change path of install_onnx.sh
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disable tiny_yolov2 in broken tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Revert "Change path of install_onnx.sh"
This reverts commit ba9db165f3be430f2aff1ef413299ed04637196a.
This change is only required for Intel internal CI pipeline until
the settings are matched with the upstream's CI pipeline.
* Added debug statements for debugging CI error
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Add --build_wheel to linux openvino pipeline
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added -v option to onnx_test_runner for debugging
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed path change patch
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added -c 1 to onnx_test_runner
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Refactor MO python invocation in separate function
Cleans up Model Optimizer python invocation check and conversion
logic. Invokes MO only once in GetCapability() and passes the
IR strings (xml and bin) to the Compiler as meta-def attributes.
* Add comments
* code cleanup and comments
* Code cleanup for GetCapability
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed unnecessary files
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Revert "Added -v option to onnx_test_runner for debugging"
This reverts commit d1dd70938a94d648df1a1dbbc2e48d0b97e49ec8.
* Revert "Added debug statements for debugging CI error"
This reverts commit b86d41afed2aa29c3508155d6f9c8d3a7263cc60.
* incorporate Status Code changes
* ComputeFunc returns Status::OK() on success
* Use test names to disable tests for MYRIAD and VAD-R
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Rename local identifiers from CNNNetwork to OpenVINO network
CNNNetwork is an OpenVINO API class that represents more than
just convolutional neural networks (CNNs). Renaming helps avoid
confusion that the API only supports CNN-type models.
* Added error message if building on windows
* Removed duplicate option in Cmake
* Removed unnecessary parameters in activation_opt_test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Refactor Map search and access logic for efficiency and cleanliness.
* use C++ style casts
* Use os.path.join for python directory path operations
* use C++ style casts
* EP classes should use onnxruntime namespace
* Clean up fixes from PR comments
* Don't explicitly shutdown Py interpreter
* Remove debug print statements
Prints will be re-enabled later with a logging mechanism with
debug/verbose printing options.
* Decrement ref counts for used pyObjects
* Restore build instructions for other compilers
Content under the "Using other compilers" section was
accidentally deleted by a previous commit. Restoring that
content from the latest upstream repo.
* CMake code cleanup
Code clean up, commenting and formatting of CMake code.
* Don't pass the unused device_info parameter to OpenVINOGraph ctor.
* Add support for multiple I/O data types
Adds support for the following tensor data types for graph inputs
and outputs:
1) float
2) float16
3) int32
4) int16
5) int8
6) uint16
7) uint8
* cleanup setup.py module list definition
* Deduce index of input using tracked input index map
Ignores initializers in case they are ordered before inputs.
* Removed debug statement in MO code
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* PR feedback
* Removed per_sample_tolerance for openvino
* Removed unnecessary disabled tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed debug function
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled tiny_yolo_v2 due to accuracy issues
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Changed the disabled reason for broken tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Reshape with no input
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Python formatting with Autopep8
* Minor fix for MYRIAD devices
* Added zero dimension check
Also removed setting batch size for the network
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Set the threshold to larger value for MNIST
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed setting higher threshold in provider_test_utils
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Check for --use_openvino in python wheel setup.py
Add openvino modules to the setup script so the wheel package
includes them only when the --use_openvino build option is given.
* Removed nullptr checks for GetNode()
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Add ability to set the session and run logger severity via SessionOptions and RunOptions
Inherit severity from the next logger up if logger severity isn't specified in SessionOptions or RunOptions
Expose ability to set default logger severity in python bindings.
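The inheritance behavior described above can be sketched as a resolution chain. The attribute names and the default severity value here are illustrative assumptions, not the actual ONNX Runtime API.

```python
# Sketch of the severity-resolution order described above; attribute and
# constant names are illustrative, not the actual ONNX Runtime API.
DEFAULT_SEVERITY = 2  # assumed warning-level default for the top logger


def effective_severity(run_severity=None, session_severity=None,
                       default_severity=DEFAULT_SEVERITY):
    """RunOptions wins, then SessionOptions, then the default logger."""
    if run_severity is not None:          # set via RunOptions
        return run_severity
    if session_severity is not None:      # inherited from SessionOptions
        return session_severity
    return default_severity               # inherited from the default logger


assert effective_severity() == 2
assert effective_severity(session_severity=1) == 1
assert effective_severity(run_severity=0, session_severity=1) == 0
```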