* update onnx-tensorrt submodule to trt7 branch
* add fp16 option for TRT7
* switch to master branch of onnx tensorrt
* update submodule
* update to TensorRT7.0.0.11
* update to onnx-tensorrt for TensorRT7.0
* switch to private branch due to issues in master branch
* remove trt_onnxify
* disable warnings c4804 for TensorRT parser
* disable warnings c4702 for TensorRT parser
* add back sanity check of shape tensor input in the parser
* disable some warnings for TensorRT7
* change fp16 threshold for TensorRT
* update onnx-tensorrt parser
* fix cycle issue in faster-rcnn and add cycle detection in GetCapability
* Update TensorRT container to v20.01
* Update TensorRT image name
* Update linux-multi-gpu-tensorrt-ci-pipeline.yml
* Update linux-gpu-tensorrt-ci-pipeline.yml
* disable rnn tests for TensorRT
* disable rnn tests for TensorRT
* disabled some unit tests for TensorRT
* update onnx-tensorrt submodule
* update build scripts for TensorRT
* formatting the code
* Update TensorRT-ExecutionProvider.md
* Update BUILD.md
* Update tensorrt_execution_provider.h
* Update tensorrt_execution_provider.cc
* Update win-gpu-tensorrt-ci-pipeline.yml
* use GetEnvironmentVar function to get environment variables and switch to Win-GPU-2019 agent pool for the Windows CI build
* change tensorrt path
* change tensorrt path
* fix win ci build issue
* update code based on the reviews
* fix build issue
* roll back to cuda10.0
* add RemoveCycleTest for TensorRT
* fix windows ci build issues
* fix ci build issues
* fix file permission
* fix out of range issue for max_workspace_size_env
* Initial Commit
* Merged PR 3985217: add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc (#2346)
add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc and violating our OS layering requirements.
We linked against onecoreuap_apiset.lib in VB so we will continue doing this, but I am still unsure why not to link against onecore instead since that is where we ship. However, since Sheil is the owner of this code we will wait to discuss with him before changing anything.
* Initial changes for layering
* more snipping to get core into ort
* update build instructions to include --build_shared_lib (#2358)
* update build instructions to include --build_shared_lib
* fix line breaks
* Task 23998197: add winml_lib_core into onnxruntime.dll (#2368)
* Task 23998197: add winml_lib_core into onnxruntime.dll
* PR feedback
build break on perf_test
* return proper error when the model path isn't found (#2391)
* LearningModelSession is cleaned up to use the adapter, and parts of b… (#2382)
this is a big PR. we are going to move it up to layer_dev, which is still an L3, so we are still safe to do work there in an agile way.
we are going to move this into the L3 so that ryan can start doing integration testing.
we will pause for a full code review and integration test result prior to going into the L2.
>>>> raw comments from previous commits >>>
* LearningModelSession is cleaned up to use the adapter, and parts of binding are.
* moved everything in the winmladapter
made it all nano-COM, using WRL to construct objects on the ORT side.
base interfaces for everything for winml to call
cleaned up a bunch of winml to use the base interfaces.
* more pieces
* GetData across the abi.
* renamed some namespaces
cleaned up OrtValue
cleaned up Tensor
cleaned up custom ops.
everything *but* LearningModel should be clean
* make sure it's building. winml.dll is still a monolith.
* model moved over.
everything builds clean.
step !
* weak ref comment
* Layer dev paulm (#2408)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* Layer dev paulm (#2414)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* User/xianz/win ml telemetry (#2410)
* add option to enable winml telemetry
* add option to enable winml telemetry
* clean logs while developing
* clean the log of GUID
* compile onnxruntime_common with winml telemetry
* use option for use_telemetry
* rename option winml_use_telemetry to onnxruntime_use_telemetry
* little change
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* Layer dev paulm (#2423)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* PR feedback.
* Layer dev paulm (#2424)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* PR feedback.
* couple of fixes and coded getmutabledata()
* Layer dev paulm (#2425)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* PR feedback.
* couple of fixes and coded getmutabledata()
* fixed 2 more heap corruptions
* Layer dev paulm (#2426)
* model moved over.
everything builds clean.
step !
* weak ref comment
* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.
* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU
* PR feedback.
* couple of fixes and coded getmutabledata()
* fixed 2 more heap corruptions
* Add opset and IR check when loading model (#2413)
* Add opset and IR check.
* Add test case for future opsets.
https://github.com/microsoft/onnxruntime/issues/2371
* fixed map and sequence when passing stl types across the ABI.
found a leak in nvidia driver, but skipped it.
all winmlapitests pass now
* Moved SessionOptions over to the abi
* WinML CI (#2412)
* Pass flags to build/test WinML in CI
* Add initial CMake config for unit tests in WinML
* Set winml_unittests standard to C++17
* Add WinML API tests and port them to googletest
* Install WinML test collateral
* Add LearningModelSessionAPITests ported to googletest
* Fix WinML test files encoding
* Add GPU tests
* Add parameterized test, skip GPU tests
* Enable precompiled header
* Remove unused code and collateral
* Remove brand images
* Add dllload.cpp
* Remove images not used in API tests
* Add LICENSE.md to image collaterals
* Add models with licenses
* Remove FNS Candy tests
* Add API test models
* Add ModelInSubdirectory
* Install collaterals post-build with copy_if_different, split common lib
* fix warnings
* Link to gtest_main
* Register WinML TraceLogging provider on Onnxruntime.dll (#2455)
* Register WinML TraceLogging provider on Onnxruntime.dll
* Add ifdef to make sure trace logging provider has telemetry option when LAYERING_DONE
* No need for ifdef for TraceLoggingOptionMicrosoftTelemetry
* PR feedback
* Move etw registration into lotus environment constructor and deregister in lotus environment destructor
* Brianma/cpuwinml (#2466)
* allow building winml cpu without dml.
* Brianma/breaks (#2469)
* fix some more breaks
* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers
* move dml checks out of winml and into the adapter
* better error handling
* Brianma/fi (#2470)
* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers
* User/xianz/win ml telemetry (#2410)
* add option to enable winml telemetry
* add option to enable winml telemetry
* clean logs while developing
* clean the log of GUID
* compile onnxruntime_common with winml telemetry
* use option for use_telemetry
* rename option winml_use_telemetry to onnxruntime_use_telemetry
* little change
* Add opset and IR check when loading model (#2413)
* Add opset and IR check.
* Add test case for future opsets.
https://github.com/microsoft/onnxruntime/issues/2371
* WinML CI (#2412)
* Pass flags to build/test WinML in CI
* Add initial CMake config for unit tests in WinML
* Set winml_unittests standard to C++17
* Add WinML API tests and port them to googletest
* Install WinML test collateral
* Add LearningModelSessionAPITests ported to googletest
* Fix WinML test files encoding
* Add GPU tests
* Add parameterized test, skip GPU tests
* Enable precompiled header
* Remove unused code and collateral
* Remove brand images
* Add dllload.cpp
* Remove images not used in API tests
* Add LICENSE.md to image collaterals
* Add models with licenses
* Remove FNS Candy tests
* Add API test models
* Add ModelInSubdirectory
* Install collaterals post-build with copy_if_different, split common lib
* fix warnings
* Link to gtest_main
* fix bad merge
* Checking in a staging checkpoint so that Ryan can work with me in parallel
* build break.
* Brianma/testfails (#2473)
* add missing ir version to dictvectorizer-string.onnx
* add missing ir version to relu.onnx
* add missing ir version to zipmap*onnx
* add IR version to manually generated models
* remove an unnecessary ifdef dml
* Brianma/windowsai fi (#2475)
* update dockerfiles/README (#2336)
* Make elementwise op run 4 items per thread (#2335)
Make elementwise ops run 4 items per thread:
unroll the for loop to leverage ILP
remove the unnecessary N==0 check inside the elementwise GPU kernel
Motivation and Context
It can improve the performance of GPU elementwise ops: ~2% performance gain on the popular NLP BERT model.
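The per-thread layout above can be sketched in a few lines (a hypothetical indexing scheme for illustration only; the actual kernel's layout may differ):

```python
# Sketch: with 4 items per thread, thread t of T total threads covers
# elements t, t+T, t+2T, t+3T (bounds-checked). In the real GPU kernel
# the unrolled iterations let loads overlap, improving ILP.
def items_for_thread(t, num_threads, n, per_thread=4):
    return [t + k * num_threads for k in range(per_thread)
            if t + k * num_threads < n]

print(items_for_thread(0, 2, 7))  # [0, 2, 4, 6]
```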
* Add CUDA GatherElements kernel (#2310)
* Updates
* Update test
* Update
* Updates
* nits
* PR feedback
* Update
* Update
* PR feedback
* PR comments
* Update
* Fix build
* Fix build
* Nits
* Fix
* Layer Normalization Fusion (#2319)
basic layer normalization transform
* Add FastGelu Cuda Op for Gelu and Add bias fusion (#2293)
* Add FastGelu cuda op
* Add AddBiasGelu for experiment
* Revert "Add AddBiasGelu for experiment"
This reverts commit 5c1ee019858c657e6bb75887265cb85675626e5b.
* Add bias
* Add unit tests
* update comment
* update script
* fix build error
* update coding style
* update for CR feedback
Enable half2 optimization only when cuda arch >= 7.0
* move _Tanh to common.cuh
* implement CPU contrib OP Attention (#2333)
* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers. (#2320)
* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanupUnusedInitializers.
This means initializers that have been replaced during graph optimizations are not left in the GraphProto when we save an optimized model.
* Handle edge case where a model has an unused initializer with matching graph input by also removing the graph input.
* Use non-const iterators in std::find_if calls to make centos build happy.
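A minimal sketch of the cleanup logic described above, using plain dicts rather than the actual GraphProto types (the function and field names here are hypothetical):

```python
def clean_unused_initializers(initializers, graph_inputs, nodes):
    # Keep only initializers that some node actually references.
    used = {name for node in nodes for name in node["inputs"]}
    kept_inits = {k: v for k, v in initializers.items() if k in used}
    # Edge case from the commit above: a graph input that only mirrors an
    # unused initializer is removed as well.
    kept_inputs = [i for i in graph_inputs
                   if i in used or i not in initializers]
    return kept_inits, kept_inputs
```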
* Nuget pipeline changes (#2305)
1. refactor the pipeline, remove some duplicated code
2. Move Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml
4. In Linux nuget jobs, run "make install" before creating the package, so that extra RPATH info will be removed
* Cuda Reverse Sequence Op, mapping types of the same size using the same template function. (#2281)
* Set ElementType to String type of node metadata, instead of byte[] (#2348)
* Set ElementType to String type of node metadata, instead of byte[]
* Fix spacing
* Introduce PrimitiveType into a Type System along with an integer constant (#2307)
Improve perf by avoiding GetType<T>() calls. Introduce MLTypeCallDispatcher to switch on Input Type. Add Tensor IsType<T>() fast method.
* Fix/test dim value of 0 handling in a couple of places (#2337)
* Update the CUDA Where implementation broadcasting logic to handle a dim with value of 0.
Add unit test
Also add unit test for unary op with dim value of 0
* Exclude ngraph from Where test with 0 dim.
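NumPy follows the same broadcasting semantics, so the zero-sized-dim case the fix targets can be illustrated there (this is a reference for the expected behavior, not the CUDA code itself):

```python
import numpy as np

# Broadcasting a dim of value 0: (0, 3) with (1, 3) yields (0, 3),
# i.e. an empty result rather than an error.
cond = np.zeros((0, 3), dtype=bool)
out = np.where(cond, np.ones((1, 3)), np.zeros((1, 3)))
assert out.shape == (0, 3)

# Unary ops on empty tensors are likewise well-defined.
assert np.exp(np.ones((0, 5))).shape == (0, 5)
```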
* Openvino EP R3.1 onnxrt server (#2357)
* onnxrt server with OVEP
* onnxrt server with OVEP
* Update Dockerfile.server.openvino
* onnxrt server OVEP fix reviews
* onnxrt server OVEP fix reviews
* Implement cuda nonzero op. (#2056)
Implement cuda nonzero op.
* Directly use a python numpy array's memory if it is already contiguous. (#2355)
* Directly use a python numpy array's memory if it is already contiguous. This
could greatly improve performance for sessions with large inputs,
like a big 1920x1080 image for Faster R-CNN; a 30~40% speedup could be achieved.
* Add test cases enforcing contiguous/non-contiguous numpy arrays as inputs.
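The contiguity check can be sketched as follows (a simplified illustration of the idea; `as_ort_compatible` is a hypothetical helper, not the actual binding code):

```python
import numpy as np

def as_ort_compatible(arr):
    # A C-contiguous array's buffer can be used directly (zero copy);
    # otherwise a contiguous copy has to be made first.
    if arr.flags["C_CONTIGUOUS"]:
        return arr, False
    return np.ascontiguousarray(arr), True

img = np.zeros((1080, 1920, 3), dtype=np.float32)
assert as_ort_compatible(img)[1] is False            # no copy needed
assert as_ort_compatible(img[:, ::2, :])[1] is True  # strided view: copy
```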
* Add helper to create output to minimize binary size. (#2365)
Add ConstEigenTensorMap typedef so we don't unnecessarily const_cast the const input Tensor.
* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369)
* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS
* update
* Add Tracelogging for profiling (#1639)
Enabled only if onnxruntime_ENABLE_INSTRUMENT is ON
* test bidaf with nuphar for avx target (#2370)
increase nuphar test coverage a bit
* Fix a bug in TLS refcount that may destabilized CUDA CI (#2374)
* update output size calculation for resize (#2366)
* change how output size is calculated for resize op
* add tests for ver 10 resize
* Extend OneHot CPU kernel to support more types (#2311)
* Extend OneHot CPU kernel to support input int64_t, depth int32_t, output float
* Skip BERT before the test data fix is picked up
* Fix bug with Slice. Need to pass in flattened input dimensions so the initial offset into the input is calculated correctly. (#2372)
* Add opset 11 version of Split to CUDA ops (#2376)
Organize the CUDA ops definitions so all the opset 10 and 11 parts are together (same setup used for CPU ops)
* Layer Norm Fusion Fix (#2379)
* layer norm fusion fix
* Add input shape check in code and unit tests
* Fuse Add + Gelu (#2360)
Implement the transformer to fuse add + gelu
Implement the accurate kernel
* Skip layer norm transform (#2350)
* skip layer normalization transformer
* Another try to stabilize CUDA CI (#2383)
The root cause seems to be a failure in CUDA dealloc during teardown. The cudaFree return code was ignored before, so the debug check should ignore it too.
* fix BUILD.md typo (#2375)
build.py: error: argument --config: invalid choice: 'RelWithDebugInfo' (choose from 'Debug', 'MinSizeRel', 'Release', 'RelWithDebInfo')
* Fixed compilation with ngraph (#2388)
* Fix reuse logic in allocation planner. (#2393)
* Fix reuse logic in allocation planner.
* PR comments
* Add helpful comments
* Don't allow reuse across string tensors.
* [NupharEP] Multiple optimizations (#2380)
Fuse transpose into MatMul
Implement Pow and constant scalar simplification
Vectorize ReduceMean
Improve symbolic shape inference
Minor updates for better debugging in fused function name
* Avoid using the default logger in the graph lib and optimizers (#2361)
1. Use the session logger if it is available.
2. Don't disable warning 4100 globally. We should fix the warnings instead of disabling them.
* Change CUDA implementation of Transpose to support all fixed size tensor types (#2387)
* Change CUDA implementation of Transpose to not use a typed kernel so we can support more types with minimum binary size.
Add support for 8, 16, 32 and 64 bit types.
Add unit tests.
Add method so the implementation can be called directly (will be used by CUDA Scan very soon).
* Disable TensorRT for MLFloat16 and int8 unit tests.
* Address PR comment and add support for calling cublas implementation if type is mlfloat16.
* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. (#2398)
* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added.
* [NupharEP] force some low/zero cost ops to be inlined (#2409)
* fix cross compile bug (#2415)
* Minor optimization: if a node has already been placed, there's no need to find a kernel for it. (#2417)
* Add Reshape Fusion (#2395)
* Add reshape fusion
* Add some comments
* update comments
* update comment format
* update according to feedback
* update for recent logger change
* fix build error
* (1) Support both input and output edges in find path in graphutils
(2) Add a test case of only one constant initializer of Concat input.
(3) Refactor ReshapeFusion class to allow add more subgraph fusion in the future.
* fix error
* (1) loose constraint on initializer: non constant is allowed for reshape fusion.
(2) Change versions type to vector.
(3) Add logging.
(4) Return false when multiple output edges matched in FindPath. Add comments.
* only allow one direction (input or output) in FindPath
* [NupharEP] Update notebook and docker image (#2416)
Add BERT squad in Nuphar tutorial
Enhance speed comparison readability
* Fix the issue in matmul_add_fusion (#2407)
Fix the issue in matmul_add_fusion
If MatMul + Add has shapes [K] * [K, N], resetting them to [1, K] * [K, N] makes the output shape [1, N], which also requires a reshape on the output.
Fix: just remove the shape reset so this case is not fused.
Add a negative test case for matmul+add fusion
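The shape issue can be reproduced with NumPy (an illustration of the broadcasting behavior only, not the fusion code):

```python
import numpy as np

K, N = 4, 3
a, w, b = np.ones(K), np.ones((K, N)), np.ones(N)

# [K] @ [K, N] -> [N]: fusable without any extra node.
assert (a @ w + b).shape == (N,)

# Resetting the input to [1, K] changes the output to [1, N],
# which would force an extra Reshape after the fused Gemm.
assert (a.reshape(1, K) @ w + b).shape == (1, N)
```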
* feat(treeregressor): Update TreeEnsembleRegressor for type support (#2389)
Updates the `TreeEnsembleRegressor` to allow for `double`, `float`,
`int64`, and `int32` inputs to match the upstream specification.
Signed-off-by: Nick Groszewski <nicholas.groszewski@capitalone.com>
* onnxrt server documentation update (#2396)
* Added support for Pad-2 operator in OpenVINO-EP (#2405)
* Add CUDA If operator. (#2377)
* Add CUDA If operator.
Uses CPU operator for implementation.
By adding a CUDA version the inputs/outputs (with the exception of the 'cond' input) stay on GPU, and no other logic is required to avoid a copy to CPU across the control flow node.
* Improved documentation for onnxruntime::utils::SwapByteOrderCopy(), added precondition check.
* Fix the type constraints on CUDA If operator to exclude strings. (#2431)
* add Im2col<uint8_t> (#2438)
* Adjust codegen vectorization width from target (#2439)
* Adjust codegen vectorization width from target
* Add CUDA Scan operator. (#2403)
* Add Scan CUDA op.
Uses CPU implementation for logic.
Added some device specific functors for handling when data needs to be manipulated on a different device.
Added ability to override the materialization logic in the OrtValue slicer so DML can plug in their handling.
* Fix Windows GPU C API packaging pipeline failure (#2440)
Fix Windows GPU C API packaging pipeline failure (#2440)
* Correctly handle implicit inputs for fused nodes (#2390)
* Correctly handle implicit inputs for fused nodes
Previously, nuphar's partitioning function didn't include
node's implicit inputs into the inputs list of MetaDef, and hence
a crash was triggered in the onnx graph checker.
This commit fixed the issue. Furthermore, it also fixed a related
issue where we didn't add implicit inputs into
graph_inputs_excluding_initializers_ in Graph::SetGraphInputsOutputs.
the issue was that graph_inputs_including_initializers_ populated by
SetInputs (e.g. called by FunctionImpl::FunctionImpl) may contain
implicit inputs which were not inputs of any node in the graph.
Because they were not part of any node's inputs, these implicit inputs
couldn't be visited by going through all nodes' inputs.
Consequently, they would *not* be added into graph_inputs_excluding_initializers_.
We fixed the issue by first copying the populated graph_inputs_including_initializers_
into graph_inputs_excluding_initializers_, which then had both initializers and
non-initializers as its initial content. Later, we erase initializers from the
list. In this way, we can ensure all implicit inputs remain in
graph_inputs_excluding_initializers_.
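The fix described above can be sketched like this (hypothetical names; the real code operates on Graph members, not plain lists):

```python
def inputs_excluding_initializers(inputs_including, initializer_names):
    # Start from the already-populated inclusive list, skipping duplicates,
    # then erase initializers. Implicit inputs that are not initializers
    # survive even though no node's explicit input list mentions them.
    seen, excluding = set(), []
    for name in inputs_including:
        if name in seen:
            continue
        seen.add(name)
        if name not in initializer_names:
            excluding.append(name)
    return excluding

assert inputs_excluding_initializers(["X", "W", "X", "Cond"], {"W"}) == ["X", "Cond"]
```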
* refined comments and fixed duplicates
Address CR by revisiting comments in terms of implicit inputs
Also fixed an issue by skipping duplicates while copying inputs
from graph_inputs_including_initializers_.
* address CR
explain why we need to collect nodes' implicit inputs
* don't rely on pointer values for iterating std::set
Previously, openvino relied on iterating a set of NodeArg pointers
to construct inputs and outputs for a fused graph. It could cause
non-determinism. The reason was that although iterating std::set by
itself is stable, pointer values of NodeArgs may vary. Consequently,
we could end up visiting the set's elements in different orders for
different runs for the same test, which resulted in constructing
inputs (and outputs) with different orders to the fused graph.
For example, for the same test, we may have inputs [A, B] in some
runs but inputs [B, A] in others.
Let's use std::string as the key type to avoid such nondeterminism.
This commit also added implicit inputs into meta->inputs while returning
the capability from the openvino provider.
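The ordering fix can be illustrated in a few lines (a simplified analogue: Python object identity stands in for C++ pointer values):

```python
class NodeArg:
    def __init__(self, name):
        self.name = name

args = [NodeArg("B"), NodeArg("A"), NodeArg("C")]

# Keying by object identity (the analogue of pointer values) depends on
# where objects happen to live in memory and can differ between runs.
# Keying by name gives the same iteration order every time:
deterministic = sorted(a.name for a in args)
assert deterministic == ["A", "B", "C"]
```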
* Fixed another latent issue in openvino's GetCapability function
The issue was that we couldn't simply erase fused_inputs and fused_outputs
while iterating the nodes. For example, an output NodeArg may have multiple
uses, and it's wrong if we erase it from fused_outputs when we encounter only
one of its uses as input.
* Remove DeviceAllocatorRegistry class (#2451)
Remove DeviceAllocatorRegistry class
* CSharp api and test for loading custom op shared library (#2420)
- Added C-API test for loading custom op shared lib.
- Made some changes in C++ api header and C-api implementation to get it working.
- Added C# API and corresponding test for loading custom op shared library.
* Parallel Gelu with ParallelFor (#2399)
Parallel Gelu to get better performance for Gelu
* Clean up build.py (#2446)
* Pull the latest image before running docker build
* Fuse SkipLayerNorm with Bias (#2453)
Fuse SkipLayerNorm with Bias
* Allow more than one invocation of CreateEnv in the same process. (#2467)
* Allow more than one invocation of CreateEnv in the same process.
* Fix centos build
* Symbolic shape inference improvements: (#2460)
* Symbolic shape inference improvements:
- add a mode to guess unknown ops' output rank
- add support for GatherND
- add support for If
- fix a bug in get_int_values when the tensor rank > 1, by treating it as no sympy data
- add symbol to literal merge when ONNX silently merges dims
- fix a bug in Concat when input dim is 0
- fix a bug in ConstantOfShape that computed dim is not updated
- add support for dynamic shape in ConstantOfShape
- fix a bug in Loop output shape that loop iterator dim is not inserted at dim 0
- add support for dynamic padding in Pad
- add support for dynamic shape in Reshape
- add support for Resize with opset > 10, by treating output dims as dynamic
- fix a bug in Slice when starts/ends are dynamic
- restrict input model to opset 7 and above
- make output model optional to avoid disk write when testing
Run model tests for symbolic shape inference
Reduce 2GB docker image size of nuphar
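One item above, the symbol-to-literal merge, can be sketched as follows (a hypothetical helper; the actual implementation tracks sympy expressions):

```python
def merge_dims(d1, d2, symbol_map):
    # When ONNX silently merges a symbolic dim with a literal one,
    # record symbol -> literal so later shapes use the literal.
    if d1 == d2:
        return d1
    if isinstance(d1, str) and isinstance(d2, int):
        symbol_map[d1] = d2
        return d2
    if isinstance(d2, str) and isinstance(d1, int):
        symbol_map[d2] = d1
        return d1
    raise ValueError(f"cannot merge {d1!r} and {d2!r}")

m = {}
assert merge_dims("N", 4, m) == 4 and m == {"N": 4}
```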
* add additional test data set for nuget pipeline (#2448)
* add SAS token to download internal test data for nuget pipeline
* update azure endpoint
* fix keyvault download step
* fix variable declaration for secret group
* fix indentation
* fix yaml syntax for variables
* fix setting secrets for script
* fix env syntax
* Fix macos pipeline
* attempt to add secrets to windows download data
* fix mac and win data download
* fix windows data download
* update test data set url and location
* Revert "Brianma/windowsai fi (#2475)"
This reverts commit 5780b864a1.
* Add scenario tests (#2457)
* Add scenario tests
* Remove TODO from model license
* Add winml_api test dependency
* fix model load test. fi from master changed the constructor (#2483)
* make api tests all pass (#2486)
* fix bad merge
* fix bad model merge
* Layer dev paulm (#2492)
* comments for dml graph transformer
fixed ort value passing using the allocator info
* fixed and coded maps and sequences across the abi
* Rename ambiguous header (#2489)
* fix one more missing IR version model (#2500)
* add missing IR version to 4 more models used by scenario tests (#2501)
* Add CLI parameters to test runner, build WinML in ARM and x86 CI (#2479)
* Support test parameters through CLI arguments
* Add WinML do Windows x86/ARM CI builds
* Code style fixes
* Update googletest
Remove GPUTEST macros everywhere now that GTEST_SKIP is supported
* Refactor main.cpp
* Build scenario tests without DML
* Link scenario tests to DML when it's enabled (#2502)
* Layer dev release pipeline (#2488)
Adds winml binaries to existing cpu nuget package, and creates new gpu dml nuget package with winml binaries and DML EP.
* Layer dev paulm (#2506)
* comments for dml graph transformer
fixed ort value passing using the allocator info
* fixed and coded maps and sequences across the abi
* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml
* Remove usage of IOBinding in WinML and use C_API Run method (#2504)
* remove usage of iobinding
* Change data structure to use vector of Ort::Values
* Polish bind input / output
* Use C API Run method
* Update providers on evaluate getresults
* Remove run and IObinding interface from WinMLAdapter
* Remove use of IObinding
* bind unbound outputs code moved to learningmodelbinding
* clean up unneeded istensor adapter function
* Fix comment
* Check if session is closed before binding and clearing
* PR feedback
* Layer dev paulm (#2507)
* comments for dml graph transformer
fixed ort value passing using the allocator info
* fixed and coded maps and sequences across the abi
* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml
* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.
* Make tests dependent on winml_dll (#2509)
* add dml binaries to DirectML package and be more explicit about condition variables (#2520)
* re-enable warnings for winml builds and fix the warnings that were hiding (#2526)
* turn devmode back on for winml builds
* fix some warnings. include protobuf in a way that disables some warnings
* undo protobufhelpers changes and just ignore 4100 errors in pb code
* attempt to isolate protobufhelpers errors
* add template specialization for getting tensor proto data
* Layer dev paulm (#2533)
* comments for dml graph transformer
fixed ort value passing using the allocator info
* fixed and coded maps and sequences across the abi
* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml
* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.
* moved files from inc to lib\api.core
cleaned up some of the cmake
* staged changes
* Spawn child process to run DeviceLostRecovery scenario test (#2530)
* Spawn child process to run DeviceLostRecovery scenario test
* Layer dev paulm (#2536)
ori said yes
* add missing namespace to winml_trace_logging_provider in lotusenvironment.h (#2542)
* Handle exception thrown from all apis in WinMLAdapter (#2539)
* various changes to unblock windowsai ADO build
* Fix custom ops scenario tests (#2562)
* Do not shut down protobuf after the ort environment gets destroyed. Lazy-load the lotus environment the first time it is needed
* comment typo
* pr comment about calling phoenix singleton
* Make lotus_environment static in winmladapter
* Layer dev paulm (#2567)
* comments for dml graph transformer
fixed ort value passing using the allocator info
* fixed and coded maps and sequences across the abi
* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml
* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.
* moved files from inc to lib\api.core
cleaned up some of the cmake
* staged changes
* making windowsAI azure dev ops work.
* code review comments.
* revert changes
* Cmake and preprocessor fixes that were uncovered by building on agents without DML available via the SDK
* Layer dev dml delayload (#2580)
* Brianma/cpu (#2583)
* don't include dml stuff in cpu builds
* tests that link the image lib also need the telemetry lib now
* Throw Winml_err_invalid_binding if binding gpu resource on cpu device (#2589)
* Throw Winml_err_invalid_binding if binding gpu resource on cpu device
* PR comments. No need to query executionprovider for is gpu device
* User/xianz/ortthrow (#2596)
* throw and handle onnxruntime exceptions
* handle exception thrown from ort in winmladapter
* undo changes in error.h
* add message to HRESULT
* User/xianz/ortthrow (#2599)
* throw and handle onnxruntime exceptions
* handle exception thrown from ort in winmladapter
* undo changes in error.h
* add message to HRESULT
* add status error message
* Remove uwp onsuspending winrt call because logruntimeperf is getting removed (#2630)
* User/xianz/dedup telemetry (#2631)
* investigate duplication of telemetry in winml and ort
* remove winml telemetry events
* telemetry executionProviderEvent
* remove unnecessary file and refactor code a little bit
* Revert back TelemetryEvent, which sends up the ETW event.
* merge changes from layer_dev to windowsai (#2638)
* Remove underscore from googletest names (#2616)
* Fix leaking memory allocator
Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24278761
and https://microsoft.visualstudio.com/OS/_workitems/edit/24330198
* Explicitly initialize Ort::Value with nullptr
* Cache WinML adapter
* bad merge
* define private version of dxcore enum that is added in 19H1 SDK. (#2654)
* add comment explaining the private definition of the dxcore d3d feature level enum value. (#2672)
* do not package directml.pdb for redist packages. (#2676)
* Fix leaking operator registry (#2645)
Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24354916
* User/orilevari/windowsai master merge (#2674)
merge resolutions included pulling in telemetry logic that was merged to master and not windowsai, and dereferencing InferenceSession::sessionstate now that it is a unique pointer
* Delete Ort Allocator in LearningModelBinding (#2653)
* Delete OrtAllocator in LearningModelBinding
* PR comments to make Ort::Allocator a smart pointer
* Small comment change
* PR feedback to clean up code
* PR feedback on move semantics
* Clean up std::move
* Fix memory leaks (#2679)
Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24356109,
https://microsoft.visualstudio.com/OS/_workitems/edit/24388361 and
https://microsoft.visualstudio.com/OS/_workitems/edit/24388596
* various changes to properly organize and skip GPU tests. For now, for no-DML builds we will not run GPU tests at all. In the future we should adapt the tests to expect the appropriate errors. (#2695)
* Windowsai without fi (#2701)
* Disable Attention fusion tests when DISABLE_CONTRIB_OPS is defined (#2529)
* Setup java ci (#2528)
* Add provision in ORT for session options to be parsed when available via model file (#2449)
* Initial commit
* Fix gitmodules
* Nits
* Nits
* Updates
* Update
* More changes
* Updates
* Update
* Some updates
* More changes
* Update
* Update
* Merge
* Update
* Updates
* More changes
* Update
* Fix nits
* Updates
* Fix warning
* Fix build
* Add comment
* PR feedback
* PR feedback
* Updates
* Updates
* Update
* More changes
* Fix build break
* Comment test for now
* Updates
* Updates
* PR feedback
* Updates
* Nits
* Add tests
* Fix build
* Fix build
* Fix build
* Fix build break
* Fix build
* Nits
* PR feedback
* More change
* Expose GetSessionOptions in pybind logic and add unit test for python
* Fix build
* PR feedback
* PR feedback
* Revert "Disable thread pool creation when enabled OpenMP (#2485)" (#2535)
This reverts commit 7c7d5a149c.
* Add dynamic shape support in TensorRT execution provider (#2450)
* remove onnx-tensorrt submodule
* add new onnx-tensorrt submodule (experiment) for trt6
* update engine build for trt6
* update compile and compute for tensorrt6.0
* Update tensorrt_execution_provider.cc
* Update tensorrt_execution_provider.cc
* Update tensorrt_execution_provider.cc
* Update tensorrt_execution_provider.cc
* switch to onnx-tensorrt master for TensorRT6
* Update tensorrt_execution_provider.cc
* Handle dynamic batch size and add memcpy in TensorRT EP
* update test cases
* Update tensorrt_execution_provider.cc
* update onnx-tensorrt submodule
* Update Dockerfile.ubuntu_tensorrt
* Update Dockerfile.ubuntu_tensorrt
* Update run_dockerbuild.sh
* Update run_dockerbuild.sh
* Update install_ubuntu.sh
* Update concat_op_test.cc
* Update tensorrt_execution_provider.cc
* Upgrade TensorRT to version 6.0.1.5
* Update onnxruntime_providers.cmake
* Update CMakeLists.txt
* Update reduction_ops_test.cc
* Update install_ubuntu.sh
* Update Dockerfile.ubuntu_tensorrt
* Update Dockerfile.tensorrt
* Update BUILD.md
* Update run_dockerbuild.sh
* Update install_ubuntu.sh
* Update onnxruntime_providers.cmake
* Update install_ubuntu.sh
* Update install_ubuntu.sh
* Update gemm_test.cc
* Update gather_op_test.cc
* Update CMakeLists.txt
* Removed submodule
* update onnx-tensorrt submodule
* update header file
* Removed submodule
* add submodule onnx-tensorrt Kevin's branch shape-test
* add debugging code
* Update tensorrt_execution_provider.cc
* Update tensorrt_execution_provider.cc
* merge master
* Removed submodule
* update onnx-tensorrt submodule
* add more changes for dynamic shapes
* Update tensorrt_execution_provider.cc
* update for dynamic shape
* update dynamic shape processing
* fix logger issue
* remove submodule onnx-tensorrt
* add submodule onnx-tensorrt
* add env variable min_subgraph_size
* remove redundancy
* update document
* use onnxruntime::make_unique
* fix multi-run issue
* remove some tests to save CI build time
* Add dynamic shape test
* Update TensorRT-ExecutionProvider.md
* Add example of running Faster R-CNN model on TensorRT EP
* Add more details on env variables
* update environment variables
* Update tensorrt_basic_test.cc
* Update model tests
* Update tensor_op_test.cc
* remove --use_full_protobuf
* Update build.py
* User/xianz/telemetry (#2458)
* enable telemetry
* enable telemetry
* set enable telemetry as default
* for debugging
* remove log and set disable telemetry as default back
* delete private file while testing
* resolve comment: mainly add license header, rename macro and update docs
* rewording in privacy.md
* Fix integer overflow in cuda NonMaxSuppression implementation (#2540)
* add test case that should pass but fail
* fix nms
* extract int_max_output_boxes_per_class
* Introduce container type runtime checks and other improvements (#2522)
Rework TensorSeq in a manner consistent with Tensor and SparseTensor
in terms of type system setup.
Reduce templating. Introduce helpers to ensure the same
data type.
Make OrtValue __dtor not virtual.
Introduce ContainerChecker
* Fix C API tests for centos and mac (#2544)
* change c++14 to c++11
* add ld lib path for centos
* enable csharp tests on macos
* fix C API test on MacOS + fix manylinux dotnet install
* fix manylinux dotnet install
* fix lib link
* Add back executable bit to build.py
* Fix a bug handling negative begin pad values in Pad op (#2550)
* Fix bug in Pad op
* Update
* DNNL CMAKE update (#2548)
* Fix android build (#2558)
* Update win-x86-ci.yml (#2557)
Fix build pipeline break
* Re-enable Windows C# tests (#2564)
* disable onnx_test_runner -x invocations for dnnl (#2568)
* Allow sequence length to be symbolic (#2559)
* setup java ci mac (#2570)
* make layernorm fusion to support opset 11 (#2545)
* Fix a warning found in the latest VS release
* Add more check on SkipLayerNorm and BiasGelu fusion (#2574)
* Fix file not found error during docker build. (#2569)
* Add ConvTranspose1D (#2578)
* Ryanunderhill/packagename test (#2582)
* [Nuphar EP] fixes for some object detection models (#2581)
Update notebook tutorial with multi-threaded int8 GEMM from #2517
* EmbedLayerNormalization Fusion Improvement (#2553)
Embedding layer norm fusion improvements - add more checks
* Update version (#2584)
* Temporarily exclude vgg19 test from Python backend test
1. temporarily exclude the vgg19 test, which consumes too much memory and runs out of memory on the Upsquared device. A single test pass for vgg19 succeeds; needs further investigation (#2588)
2. Update docker file to decrease the docker image size
* Update docs for Android NNAPI EP (#2586)
* Fix lto bug for protobuf and ubuntu
* add path to build dir before test run (#2590)
* Add missing env variables for mac pipeline test (#2595)
* Fixed an issue in updating realized dims (#2597)
when we update realized dims for Scan's output, the sliced axis also
needs to be inclusive, i.e. we should check with "dim >= insert_inclusive_axis",
because the offsets in the symbols are based on the Scan subgraph.
Otherwise, we would end up with a shape mismatch later.
* Java API for onnxruntime (#2215)
* Add support for opset 11 in reshape fusion (#2592)
Support opset verion 11 in reshape fusion
* Rename automl python tools folder to featurizer_ops. (#2593)
* Support opset 11 subgraph of Squad model in Embed Layer Normalization (#2605)
Support the opset 11 Squad model that is exported from PyTorch nightly. The embed layer uses the Range op, which was missing in the transformer.
* symbolic shape inference: fix warnings in GPT-2 model (#2608)
And revise nuphar perf test on BERT squad
* Dump subgraph ID and fused graph ID (#2607)
* Dump subgraph ID and fused graph ID
Dump subgraph ID and fused graph ID for better debugging
* Remove local static fused_count
added a field global_fused_count_ to NupharExecutionProvider class
* EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613)
Support subgraph of SQuAD model exported from pytorch with dynamic input axes
* Allow providers to be set for InferenceSession at construction (#2606)
* Remove unnecessary parameter in some places in GatherElements implementation (#2612)
* Remove unnecessary parameter in some places
* Update
* Update
* Make sure fenced tensor could not reuse other tensor. (#2561)
Fix random error caused by this.
* Improve Embed Layer Norm Fusion for SQuAD with static input shape (#2621)
* fix float16 comparison in initializer (#2629)
* epsilon attribute for layernormalization fusion (#2639)
* removed unnecessary batch file and fix path (#2640)
* Add shape inference to ConvTransposeWithDynamicPads schema (#2632)
* Improve cuda expand() operator's performance. (#2624)
* Cuda pad optimize when no padding is needed. (#2625)
* Shortcut cuda Pad() when no padding is needed.
* Optimize cuda scatter() for the 2D-compatible case. (#2628)
* Optimize cuda scatter() for the 2D-compatible case.
* Add some comments.
* fix build error for ARM (#2648)
* Improve performance of resize() in Nearest mode (#2626)
Special treatment for 2D: check for the same size as the input image.
In the 2D kernel, make use_extrapolation a template parameter.
* Fix memory exception in Layer Norm Fusion (#2644)
* Windows CI changes(#2650)
* Revert "User/orilevari/windowsai master merge (#2674)"
This reverts commit fe26146311.
* Revert "Windowsai without fi (#2701)"
This reverts commit 285d4c85ff.
* Deref unique pointer for session_state
* send shutdown event when dll is unloaded and EvaluationStop, SessionC… (#2704)
* send shutdown event when dll is unloaded and EvaluationStop, SessionCreationStart Events.
* Add EvalutationStart Event
* add comment
* use correct type for for loop (#2755)
* ARM CI (#2759)
* Set ARM agent pool
* Set CMake generator to VS 2019 in ARM
* Use system-wide CMake instead of custom version
Our custom version is too old for VS 2019
* Use DML and build shared lib in ARM CI
* Restore nuget packages in ARM CI
* Disable DML
* Refactor ARM debug/release builds
* Use system packaged Python version
* Remove hardcoded Python path
* Downgrade Python to 3.7 for build
* Remove explicit CMake path
* Fix invalid JSON in cgmanifest.json (#2760)
* Fix cgmanifest.json generating script (#2770)
* Fix protobuf submodule name
* Workaround pygit2 bug
* Remove usage of WHOLEARCHIVE in WinML CMake and add WinMLAdapterFactory (#2726)
* Remove usage of WHOLEARCHIVE in WinMLAdapter CMake and add WinMLAdapterFactory
* PR feedback, no need for dll(export) since using def file
* PR comments
* Small comment in gen_def.py
* User/orilevari/32bit comparison warning (#2800)
* use correct type for for loop
* explicitly specify void for the parameters of OrtGetApiBase, because the function is defined in C; when the parameter list is just (), it is interpreted as taking an unknown number of parameters. This was causing compiler warning C4276.
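For background, a minimal C sketch of why the empty parameter list matters (illustrative only; `f` and `g` are hypothetical names, not ORT functions):

```c
#include <assert.h>

/* Prior to C23, "int f()" declares a function with an UNSPECIFIED
 * parameter list, so the compiler cannot check call sites against a
 * prototype and may warn (e.g. MSVC C4276 under certain settings).
 * "int g(void)" is a full prototype taking exactly zero arguments. */
int f();     /* unspecified parameters */
int g(void); /* prototype: no arguments allowed */

int f() { return 1; }
int g(void) { return 2; }
```

Spelling out `(void)` gives the compiler a complete prototype, which is why the fix silences the warning.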
* Move winml_provider_factory.h to proper location (#2801)
* Scenario Test : Build Google Test and Taef Test based on preprocessor definition (#2809)
* Add winml macro wrappers on top of google test macros
* change test methods to disabled
* Add custom winml macros for both taef and google tests
* PR comments
* Filter CPU case for IsFloat16Supported (#2802)
* Merge fixes
* CMake cross-generator fixes (#2790)
* Fix compilation w/ non-VS CMake generators
* Fix custom WINMD target in Ninja
* Remove usage of msbuild .targets file
* Fix linking using DML in Ninja
* Automate SDK kit version choice
* Cleanup DML package install
* Fix SDK version detection
* Fix comment
* Revert unittest linkage changes
* Fix latest SDK detection
* Don't link to non-uapcore libraries
* Remove MessageBoxA reference and unused link libs
* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)
* Add winml macro wrappers on top of google test macros
* change test methods to disabled
* Add custom winml macros for both taef and google tests
* PR comments
* Refactor winml api tests
* Move additional gtest specific macro definition into googleTestMacros.h
* Fix test build break: winml_lib_api needs to be statically linked to tests, since winmlp::learningmodeldevice::iscpu() is being used in devicehelpers.cpp (#2837)
* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)
* Fix warnings that cause build to fail
* Fix test warnings and delayload linking (#2843)
* Ortmemoryinfo struct changed
* mark the camera scenario test as edgecore because it uses d3d11 (#2852)
* User/orilevari/pipeline fi breaks (#2853)
* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.
* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev
* Remove internal libs from tests (#2864)
* Support custom DML in onnxruntime_providers.cmake (#2867)
* Make DML include path global (#2882)
* Make DML include path global
* Add generated cppwinrt headers to winml_lib_common
* Integrate changes to WindowsAI to make ADO Build (#2886)
* Revert "CMake cross-generator fixes (#2790)"
This reverts commit dbe7d97fa1.
* add additional suppress warning in onnx_proto
* ignore /wd4996 warning
* DML execution provider fixes
* Revert "Revert "CMake cross-generator fixes (#2790)""
This reverts commit 1ae7b4bcbc.
* Update func signature of custom op function overloads
* common devicehelpers fixes
* Add pch.h for winml_lib_common
* re-add winml_lib_common_dir/inc to include path for winml_adapter
* User/orilevari/dml redist shared folder (#2890)
* move dml nuget package directory up one level to make it shared between build flavors
* Merge conflict fix
* Revert "Merge conflict fix"
This reverts commit 142fa72cf9ce4344ad717b50b7ea2b8582aadc7c.
* Revert "Merge remote-tracking branch 'origin/master' into windowsai"
This reverts commit 6e2126d46e5e5f564d65da37dd4f70c93dd81165, reversing
changes made to b3f5583dc9249834b947c8ea905f6a98060d5bd6.
* Make winml_test_common free of test macros (#2902)
* Add option to build winml_test_common without googletest specifics
* remove test macros from squeezenet
* comment change
* Make cmake functions to get scenario and api source
* PRcomments about hresult
* Build errors fixed
* Fix cmake variable
* Make winml_google_test_lib to build main.cpp once
* PRcomments
* Don't generate files outside the build root (#2914)
* Don't generate files outside the build root
* Add onnxruntime_EXTERNAL_DEPENDENCIES to WinML
* Add DML depedency on RESTORE_PACKAGES
* User/orilevari/fix yaml merge bugs (#2918)
* Add winml test source parameter into cmake function (#2919)
* Add option to build winml_test_common without googletest specifics
* remove test macros from squeezenet
* comment change
* Make cmake functions to get scenario and api source
* PRcomments about hresult
* Build errors fixed
* Fix cmake variable
* Make winml_google_test_lib to build main.cpp once
* PRcomments
* Add arguments to unittest cmake functions
* remove comment
* Revert "Revert "Merge remote-tracking branch 'origin/master' into windowsai""
This reverts commit ade5abe72a4234fdbc3623093c61c02c6b0bdc26.
* Fix breaks from merge with ORT master
* Brianma/linux (#2917)
* don't include windows.h in cross-plat header
* add default case for switch statement
* signed/unsigned mismatch fix
Co-authored-by: Brian Martin <42186431+martinb35@users.noreply.github.com>
* User/sheilk/winml adapter c api (#2891)
* Create winml adapter c api
* fix build
* make it build
* move adapter into onnxruntime core/session
* entry point not exported
* minor changes
* make model metadata work
* make tests pass
* implement all the model reflection apis on the adapter c abi
* update the new ort interface to create a lotus environment with a logging sink
* start adding ort env
* move all winml code into adapter folder/lib to isolate it
* ensure a single logging manager at a time
* start refactoring session
* refactor session creation interface
* add cpu and dml session option methods to adapter
* finish session init
* stub out interfaces in ort lib to perform similar mechanics of iinference session
* enable profiling, and enable schema override
* update session register graph transformers
* turn back on custom registry for custom ops
* Add sync api
* add last c api stubs
* should build... but all feature values are broken since this is in flight to moving all implementation details into ivalue
* remove ep adapter header
* Implement DML execution provider functions from adapter (#2846)
* Implement DML execution provider functions from adapter
* Use functions in OnnxruntimeEngine.cpp
* make map/sequence type_infos freeable, and start implementing ivalue
* make it build again
* implement value methods
* implement remaining methods
* remove com adapter abi
* check dml session
* cache the allocator on ivalue
* check if resource is cpu/gpu when access its mutable data
* update tensor
* mismatched parentheses
* fix tensor base and binding obj
* it evaluates tensors! sometimes...
* minor fixes
* enable gpu evals
* wrap all existing winml adapter apis with API_IMPL to try catch (#2854)
* update winml... tensor strings are broken, need to template tensorbase to do different things for strings
* make tensor strings work with 2 copies in/2 copies out
* Fix tensor string and allocator bug
* make maps work again... needs some fixes still
* Make it build!
* enable map inputs
* map outputs
* unbound outputs for sequences and maps
* User/xianz/merge windowsai (#2883)
* Packaging pipeline changes for VS 2019 (#2711)
* Tiny fix to codegen
* Simplify cache implementation and avoid static variables that may carry over between models
* Extend DML kernels (#2641)
* Additional DML operators
* Check unsupported attributes and inputs
* Address PR comments
* Add kernel capability function used for partitioning, and re-enable stride-based int64 support based on value range
* Fix test failures
* Build fix
* PR comments
* Update Nuphar tutorial notebook (#2721)
1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993
* Add schema for new Qops (#2611)
* Add schema for new Qops
* adding shape inference + qlinearaveragepool
* plus review comments
* plus review comments
* updates per review comments
* plus review comments
* [server] Add support for model_name and model_version as cli parameter (#2708)
* remove 64bit warning message from python validation. (#2727)
* MLAS: ARM64 build fix (#2734)
fix bad usage of vreinterpret to cast vector element types
* Fix broken python docs links (#2740)
* Fix build on Mac OS (#2731)
mac os ld doesn't support --whole-archive; the correct option is -all_load
* fix ngraph wheel (#2737)
* fix ngraph wheel
1.1.0 onnxruntime_ngraph wheel doesn't work
* remove libdnnl.so in nGraph Libs
* make it easy to compare
* Split onnxruntime server to a separated folder (#2744)
* Fix build for Python 3.8 (#2747)
* Fix build for Python 3.8
* Update protobuf to 3.11.2 (#1928)
Update protobuf to 3.11.2 (#1928)
* Change default optimization level to All (from Basic) (#2745)
* change default optimization level to All (from Basic)
* fix test
* fix c# test
* Update numpy to 1.18 (#2758)
* Update numpy to 1.18
* Pipeline changes for python 3.8 (#2753)
1. Pipeline changes for python 3.8
2. Fix a regression in setup.py that was introduced in the previous commit.
Please note that we still haven't made Python 3.8 + Windows + CUDA work.
* Add basic stacktrace output for posix debug builds. (#2749)
* [NupharEP] fix a race condition when multiple sessions running different models concurrently (#2772)
* Revert "Change default optimization level to All (from Basic) (#2745)"
This reverts commit 56bb503c2f.
* Fix typo in error message (#2736)
* Rename MKL-DNN to DNNL to fix broken link (#2730)
* Fix nightly build version number issue
* Pass BUILD_BUILDNUMBER to linux docker
* Disable featurizers in python packages
* Import more featurizers (#2781)
Make kernels non-template. Add input constraint for learnt data.
Add min_max_scalar_transformer, robust_scalar_transformer,
inputation_marker_transfomer, label_encoder_transformer,
missing_dummies_transformer along with tests.
Advance Featurizers library commit.
* Implement a more stable softmax (#2715)
* Implement a more stable SoftMax
e^x is represented as infinity if x is large enough, like 100.f. Infinity divided by infinity is a NaN, so softmax produces a NaN if one or more items are large enough.
The math transform below is leveraged to get a stable softmax:
e^xi / (e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))
For convenience, max is forced to 0.f if all xi are negative.
* Contributing: Fix a typo (#2784)
* ACL EP GEMM improvements (#2780)
When it is possible we use a fully connected layer instead of the gemm implementation.
This will let the library use the best implementation based on the input data.
* ACL EP convolution improvements (#2774)
Added the optimized implementation for depthwise convolution for both ACL v19.02 and ACL 19.05.
Also the pointwise convolution seems to be more optimal in the CPU implementation so we opted for that instead.
* Add script for release Nuget validation (#2719)
* Initial commit
* Nits
* Disable a test temporarily
* Change working directory
* Test
* Add download python step
* Test update
* More changes
* Fix space issue
* Fix
* Verify nuget signing
* Fix
* Spaces
* PR feedback
* Nit
* Fix
* Fix
* Remove temporary changes
* add uint8 support to where op (#2792)
* Improve bert optimization script: (#2712)
(1) Move input int64=>int32 conversion to embed layer fusion.
(2) Output epsilon attribute for LayerNormalization fusion.
* add session creation time cost. (#2798)
* ML.NET team needs featurizers within a package (#2789)
Add auto ml featurizers to Windows, MacOS as well as to GPU packaging-pipelines.
* Initialize max of softmax with lowest of float (#2786)
* MLAS: update SGEMM threading parameters (#2808)
* add interface to copy batch tensors. (#2807)
* add interface to copy batch tensors.
* onnxruntime
* speed up Windows TRT CI (#2811)
* don't run cuda tests if building with tensorrt
* remove unnecessary build options for win trt ci
* refactor win gpu tensorrt ci yml
* --numpy_version=1.17
* update
* update
* azcopy and cuda path
* Update test data (#2356)
* Add timeseries imputer transformer featurizer kernel (#2813)
Make kernels non-template. Add input constraint for learnt data.
Fixup tests.
Add two more featurizers along with tests. Tests fail.
min_max_scalar_transformer
robust_scalar_transformer
Fix tests serialized stream by prepending version bytes.
Add inputation_marker_transfomer and the test.
Fix up float/double type designations.
Added label_encoder_transformer along with a test.
string_throw case is broken at the moment.
Fix labelencodertransfomer_test.cc string_throw case
Rename maxabsscalertransformer_test.cc
Add MissingDummiesTransformer along with the test.
Update manifest.
Add TimeSeriesImputerTransformer definition, implementation and tests
* Fix memory leak in TRT (#2815)
* fix memory leak issue
* revert EP_FAIL on enqueueV2
* Add manifest missing comma
* Run static code analyzer on most of our code (#2817)
* Scenario Test : Build Google Test and Taef Test based on preprocessor definition (#2809)
* Add winml macro wrappers on top of google test macros
* change test methods to disabled
* Add custom winml macros for both taef and google tests
* PR comments
* update quantization doc (#2783)
* update documentation for quantization script
* plus some spell corrections
* Filter CPU case for IsFloat16Supported (#2802)
* update default optimization level + fix gemm_activation fusion (#2791)
* update default optimization level + fix gemm_activation fusion
* fix typo
* add unit test and incorporate review comments
* fix test comment
* Fix dnnl wheel package name (#2823)
* Append '-dnnl' to whl package name when --use_dnnl
* Update build.py
* Update Ubuntu & TensorRT version in README (#2820)
Dockerfile.tensorrt uses nvcr.io/nvidia/tensorrt:19.09-py3 as its base image; update the Ubuntu and TensorRT versions according to
https://docs.nvidia.com/deeplearning/sdk/tensorrt-container-release-notes/rel_19-09.html#rel_19-09
* Merge fixes
* Add OneHotEncoder and HashOneHotEncoder kernels. (#2830)
Add defs and implementation for OneHotEncoders, adjust date_time_transformer kernel and test.
Add OneHotEncoder kernel test.
Add HashOneHotVectorizerTransformer unit test.
This does not link due to multiple definitions of functions
that are included into header from a CPP file.
* Upgrade gtest to the latest version (#2827)
WinML would like to update the googletest submodule. They want some newer features (namely GTEST_SKIP to skip tests programmatically and be able to skip entire fixtures easily) and would need to update the submodule version.
However, the new version of the code hits a bug in gcc. The bug is already fixed in the latest gcc, but we're using gcc 4.8.x, which won't be patched, so we compromised and changed our code a little bit to make it work.
The gcc bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51213
* Add support for int64_t for topk CPU. Fixes github issue #2806. (#2833)
* Ignore allocator type in ExecutionProviders allocator map. Make default initialization of OrtMemoryInfo more clearly invalid. (#2768)
* Remove allocator type from the key comparison in ExecutionProviders.
Remove usage of DummyArena as it's no longer necessary.
* Fix x86 tests where arena allocator is disabled.
Make initialization of OrtMemoryInfo clearer by adding Invalid enum value.
* Make OrtValueNameIdxMap::MaxIdx more intuitive.
* Convert ExternalProject Featurizers into git submodule (#2834)
Add git submodule for Featurizer library.
Update cmake to build for git submodule.
* add domain check for nodes + update documentation (#2831)
* Fix cgmanifest.json generating script (#2770)
* Fix protobuf submodule name
* Workaround pygit2 bug
* User/orilevari/32bit comparison warning (#2800)
* use correct type for for loop
* explicitly specify void for the parameters of OrtGetApiBase, because the function is defined in C; when the parameter list is just (), it is interpreted as taking an unknown number of parameters. This was causing compiler warning C4276.
* CMake cross-generator fixes (#2790)
* Fix compilation w/ non-VS CMake generators
* Fix custom WINMD target in Ninja
* Remove usage of msbuild .targets file
* Fix linking using DML in Ninja
* Automate SDK kit version choice
* Cleanup DML package install
* Fix SDK version detection
* Fix comment
* Revert unittest linkage changes
* Fix latest SDK detection
* Don't link to non-uapcore libraries
* Remove MessageBoxA reference and unused link libs
* Fix Linux CUDA nuget packaging pipeline break
* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)
* Add winml macro wrappers on top of google test macros
* change test methods to disabled
* Add custom winml macros for both taef and google tests
* PR comments
* Refactor winml api tests
* Move additional gtest specific macro definition into googleTestMacros.h
* Fix test build break: winml_lib_api needs to be statically linked to tests, since winmlp::learningmodeldevice::iscpu() is being used in devicehelpers.cpp (#2837)
* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)
* update optimization doc for BERT related fusions (#2819)
* Add bert related transformers to doc
* Add execution provider and comment for bert optimizations
* Add comment about accuracy impact of approximation
* Fix warnings that cause build to fail
* MLAS: enable threading for quantized GEMMs (#2844)
* Fix test warnings and delayload linking (#2843)
* Ortmemoryinfo struct changed
* mark the camera scenario test as edgecore because it uses d3d11 (#2852)
* User/orilevari/pipeline fi breaks (#2853)
* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.
* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev
* Remove internal libs from tests (#2864)
* Support custom DML in onnxruntime_providers.cmake (#2867)
* remove old winmladapter cpp
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
* move sequence implementation into ort lib... still commented out... need to turn back on...
* begin sequence implementation
* make maps and sequences work
* fix broken tests
* remove dead code
* misc cleanup
* CR feedback
* User/xianz/winml adapter c api (#2869)
* wrapper all existing winml adapter apis with API_IMPL to try catch
* Return HR or Throw for WinML adapter APIs if failed
* undo macro wrapper for two places
* Wrap error macros around ort apis, too.
* address CR feedback #2
* add more api throw/return macros
* Revert changes no longer needed
* revert changes to cxx api
* format winml lib.ort and winml adapter
* remove static pheonix singleton
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
* missing use_dml check in winml_adapter_session (#2930)
* --use_dnnl flag was mangled in merge (#2931)
* use dml macro not wrapping custom registry code (#2934)
* Disable LNK4199 winml_dll to enable cuda builds (#2936)
* Disable LNK4199 in winml_dll
* linkler->linker
* LearningModelSessionAPITestGpu.CreateSessionWithCastToFloat16InModel should return DXGI_ERROR_UNSUPPORTED when FP16 not supported (#2937)
* Disable LNK4199 in winml_dll
* linkler->linker
* Need to return DXGI_ERROR_UNSUPPORTED when Model does not support fp16
* Publish build symbols (#2939)
* Publish build symbols
* Don't upload PDBs for .exe files
* Make x86 build (#2943)
* fix last remaining size_t/int64_t warnings->errors (#2948)
* TensorString, Sequences and Maps use the first allocator, but should use the cpu default allocator. (#2952)
* fix tensor string allocator
* clean up default allocator usage for strings in winml lib/api.ort
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
* Handle tensor shape of zero (#2954)
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
* CR feedback (#2970)
* CR feedback
* fix weird formatting on privacy readme
* Add 'All rights reserved.' everywhere
* readd all rights reserved to winml_provider_factory.h
* remove extra space in comment
* remove extra whitespace
* fixes post master merge
* remove winml from nuget gpu pipeline
* set IR VERSION on generated_model in rnn_benchmark (#2972)
* Fix slice conformance failures (#2908)
Co-authored-by: Adrian Tsai <adtsai@microsoft.com>
Co-authored-by: Brian Martin <42186431+martinb35@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Paul McDaniel <paul_mcdaniel@hotmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Dwayne Robinson <fdwr@hotmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
* Add bert related transformers to doc
* Add execution provider and comment for bert optimizations
* Add comment about accuracy impact of approximation
1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993
* Spacing fix for code block
* Update instructions
Include java, acl, and nn api instructions on build page
* Update build instructions to link to build.md
* typo
* Update build instructions to link to build.md
* Include other minor build.md page updates
* Update CUDA version
* Fix dockerfile links
* enable telemetry
* enable telemetry
* set enable telemetry as default
* for debugging
* remove log and set telemetry back to disabled by default
* delete private file while testing
* resolve comment: mainly add license header, rename macro and update docs
* rewording in privacy.md
* [NupharEP] Add parallel schedule to JIT function name
Update Nuphar docker to use Python 3.6 and ubuntu 18.04
* Update notebook
* Avoid JIT cache file name conflict
* [NupharEP] Enable parallel schedule
* Update TVM with the fix to TVM threadpool to use OpenMP if possible
* Add parallel schedule when trying to vectorize
With this change, BERT squad perf on a 4-core (8 HT) CPU goes from 187ms to 150ms
* Address CR, docs and cmake update
* Doc fix
* Fix mkl
* Fix TVM windows build when using mklml
* Guard unused parameter
Guard unused parameter for Linux Arm and other cases.
* Add ACL (Arm Compute Library) execution provider
Add a new execution provider targeting Arm architecture based on Arm Compute Library.
Validated on NXP i.MX8QM CPU with ResNet50, MobileNetv2 and VGG models.
All unit tests are passing.
Comparative performance improvements for ResNet50v1 model obtained with
onnxruntime_perf_test:
            A72   2xA72   A53   4xA53
ACL vs CPU  16%   9%      21%   13%
Usage documentation available in ACL-ExecutionProvider.
* Fix eigen unused parameter
Fix eigen unused parameter error for Arm cross-compilation.
* Initial draft
* updates per review
* fix link
* plus one more link fix
* small changes to the optimizer documentation
* some more changes
* done
* update C_API with doc link
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.
The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.
Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
* Introduce execution mode for clarity and extensibility; Change Python APIs accordingly; Replace DisableSequentialExecution API with EnableParallelExecution for clarity.
* Fix cuda build
* Modify the test slightly
* Make C and C# APIs consistent with Python.
* Fixed a bug where tvm was missing from the python wheel
* Put Nuphar Python scripts into wheel
* Add note book tutorial
* Some improvements in symbolic shape inference for quantized models
Description: Refine threading control options and move inter op thread pool to session state.
Added thread_utils.h/cc to centralize the decision around the thread pool size under various conditions.
Motivation and Context
Currently the thread pool size of the parallel executor is hardcoded to 32. This PR makes the options for configuring the thread pool sizes clearer.
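The size-resolution idea above (an explicit option wins, otherwise derive a sensible default from the machine) can be sketched in Python. This is an illustrative stand-in with a hypothetical helper name; the real decision logic lives in the C++ thread_utils.h/cc mentioned in the description.

```python
import os

def resolve_thread_count(configured: int = 0) -> int:
    """Pick a thread-pool size: an explicit positive setting wins,
    otherwise fall back to the number of CPUs available.
    (Hypothetical helper name; sketch of the thread_utils idea only.)"""
    if configured > 0:
        return configured
    return os.cpu_count() or 1

# 0 (or any non-positive value) means "decide for me".
print(resolve_thread_count(4))
print(resolve_thread_count(0) >= 1)
```

The key point is that the fallback is computed in one place instead of being hardcoded per executor.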
* Fix broken link and minor wording updates
* Update links to use relative paths
* Update sample section organization
* Fix a few more links
* Update links to relative paths
* Fix link urls
* Update links to relative paths
* Update link to perf test doc page
* Update links to relative paths
* Update to relative paths for links
* Update link
* Mention OrtCreateSessionFromArray in C API doc
* Fix perf test executable due to removal of certain C APIs
* fix linux build
* Avoid duplication
* Update coding guidelines to prefer using make_unique for heap allocations (unless where not possible).
* Implement Nuphar execution provider
Nuphar execution provider is a TVM-based compilation provider. It has shown great speedups for RNN models using Scan.
This PR is mainly for a preview of the shared codegen library for other TVM-based providers.
* Fix submodules
* Fix TVM submodule
* Update Nuphar to latest and resolve conflicts
* Remove stale files caused by merge -X theirs
* Revert heap buffer change to not introduce onnxruntime_framework into onnxruntime_perf_test
* Fix bad merge
* Merge from Nuphar
* Fix warning treated as error, revert some unnecessary changes
* Revert some more test changes
* Some more test revert or comments to make review easier
New tests could be added later
* One more revert of unnecessary changes
* More change revert. Test could be added back later.
* Updates
* Remove preview texts
* Update README.md
* Updates
* Update README.md
* Update README.md
* Minor wording update
* Update README.md
* Update doc on CUDA version
* revert update
* Update readme for issue #1558
* Clean up example section
* Cosmetic updates
- Add an index of build instructions for browsability
- Update build CUDA version from 9.1 to 10
* Fix broken link
* Update README to reflect upgrade to pip requirement
* Update CuDNN version for Linux Python packages
* Clean up content
Updated ordering and add table of contents
* Minor format fixes
* Move Android NNAPI under EP section
* Add link to operator support documentation
* Fix typo
* typo fix
* remove todo section
* Mention OrtCreateSessionFromArray in C API doc
* Update perf tool documentation to reflect the new graph optimization enums. Relax constraint for enable_all.
* Update one more doc
* Update onnx test runner documentation
* Add default in the docs
- Added python script for generating markdown doc from the registered opkernels.
- Made some conditional changes in the pybind to expose necessary python API
- Added some missing type-constraints in the op kernel registrations
* Update version number to 0.5.0 in preparation for release
* Update to README.md to direct to Versioning doc
* Resolve PR comment
* Remove incorrect line generation
* Minor updates to update version script
* Minor comment update
* Initial commit for OpenVINO Execution Provider
OpenVINO Execution Provider provides the interface for ONNX Runtime
applications to access Intel's hardware accelerators using Intel's
OpenVINO Toolkit.
* Fixed bug in GetCapability to disable custom ops
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added OPENVINO ci pipeline
Added new pipeline for openvino provider,
made changes to support the docker build and
onnxruntime build with openvino.
Signed-off-by: Luis Daniel Castellanos <luis.daniel.castellanos@intel.com>
* Enabled all unit tests for OpenVINO EP
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed syntax issue in run_docker_build.sh file
* Added missing default OPENVINO_VERSION
Default value for OPENVINO_VERSION env was
missing causing the build to fail
* Added install Model Optimizer deps step
* Fixed python unit tests and some tests from onnx_backend_test_series
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Fixed indentation bug
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled some of the python backend tests for OpenVINO
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled some model tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Remove Duplicate checks for openvino in build.py
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Modified GetCapability for FP16
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled GPU FP32 tests that are not supported
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Convert modelProto to string and use it in compile
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Pass byte-array input args to MO
* Serialized ModelProto passed in-memory to MO
ModelOptimizer python module receives the serialized ModelProto
in-memory.
Uses appropriate ONNX function to load the serialized bytes.
* Make Py_Finalize compatible with older python versions
Also, remove the possibility of pFunc being left unassigned.
* Fall back if input dims of MatMul are greater than 2
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* fixup: Device #define syntax
* Updated the documentation
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Enable dynamic dim value
* removed commented out code
* Added Dockerfile for openvino EP
Updated instructions on dockerfiles/README.md file
Signed-off-by: Luis Daniel Castellanos <luis.daniel.castellanos@intel.com>
* Disabled fp16_inception_v1 test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Code formatting with clang-format
Uses style from the .clang-format file in root directory.
* fixup: docker tag and build error fixes
* Heuristics to automatically detect batching
Distributes slices from batch into parallel infer-request objects.
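The batching heuristic above can be sketched in Python: split the batch into slices, hand each slice to its own worker, and stitch results back in order. This is an illustrative stand-in only; the real code distributes slices across OpenVINO InferRequest objects, not Python threads.

```python
from concurrent.futures import ThreadPoolExecutor

def run_batched(batch, infer_one, num_requests=4):
    """Split a batch into slices and run each slice on its own
    'infer request' worker, preserving the original order.
    (Hypothetical names; sketch of the parallelization idea only.)"""
    slice_size = max(1, len(batch) // num_requests)
    slices = [batch[i:i + slice_size] for i in range(0, len(batch), slice_size)]
    with ThreadPoolExecutor(max_workers=num_requests) as pool:
        # ThreadPoolExecutor.map preserves input order.
        results = pool.map(lambda s: [infer_one(x) for x in s], slices)
    return [y for chunk in results for y in chunk]

print(run_batched(list(range(8)), lambda x: x * x))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```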
* Handle disabled tests in GetCapability
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled average pool and max pool if ceil_mode is 1
Also dilations are not supported if they are greater than 1
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Unsqueeze int32 test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* changes to fix output results bug
* Disabled a few C++ unit tests for MYRIAD FP16
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Manually revert '9fe162bb Enable dynamic dim value'
Reverts compile time setting of dynamic shape
Reverting manually due to significant auto-revert conflicts.
* Fixed unused variable warning
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Mul test for GPU_FP16 due to accuracy issue
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* VPU documentation update
* Disabled inception_v1 for MYRIAD and HDDL
* Also disabled a few C++ accuracy tests for HDDL
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* updates from upstream
* use the new CustomOpApis for I/O interfacing
* Pass initializers as subgraph meta-def inputs in GetCapability()
Required due to API changes introduced with PR #1019.
* Remove obsolete functions
* Save indexes of graph inputs from fused_node info
Both inputs and initializers are passed as data inputs to the
infer function. To identify only the inputs among them, save their
index info from fused_node in the Compile function.
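The index-tracking idea above can be sketched in Python: walk all data inputs, skip initializer names, and remember each true input's original position. Names here are hypothetical; this only illustrates the mapping, not the actual C++ code.

```python
def build_input_index_map(defs, initializer_names):
    """Map each true graph input name to its position among all data
    inputs (inputs + initializers), skipping initializer entries.
    (Hypothetical sketch of the fused_node index-tracking idea.)"""
    return {name: idx for idx, name in enumerate(defs)
            if name not in initializer_names}

m = build_input_index_map(["W", "X", "B", "Y"], {"W", "B"})
print(m)  # → {'X': 1, 'Y': 3}: initializers are skipped, positions kept
```

This way the provider still finds input tensors at the right index even when initializers are ordered before the inputs.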
* Documentation changes to enable VPU
* Fix VPU related changes in documentation
* Fix minor changes in documentation
* Fix VPU related changes in documentation
* Use Node.In/OutputDefs() to track graph inputs and outputs.
Don't use graph_viewer's GetInputs() or
GetInputsIncludingInitializers().
* Permit "SAME_UPPER" auto_pad attribute from MaxPool
* Disabled fp16_tiny_yolov2 in onnx model tests
* Updated documentation to include configuration guides for myriad and hddl
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Use 8 Infer requests only for VAD-R
* disable debug prints
* Clang-format source files
* Updated BUILD.md with OpenVINO R5 links
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled same upper python tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Update test exclusion syntax
* Change path of install_onnx.sh
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disable tiny_yolov2 in broken tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Revert "Change path of install_onnx.sh"
This reverts commit ba9db165f3be430f2aff1ef413299ed04637196a.
This change is only required for Intel internal CI pipeline until
the settings are matched with the upstream's CI pipeline.
* Added debug statements for debugging CI error
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Add --build_wheel to linux openvino pipeline
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added -v option to onnx_test_runner for debugging
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed path change patch
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Added -c 1 to onnx_test_runner
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Refactor MO python invocation in separate function
Cleans up Model Optimizer python invocation check and conversion
logic. Invokes MO only once in GetCapability() and passes the
IR strings (xml and bin) to the Compiler as meta-def attributes.
* Add comments
* code cleanup and comments
* Code cleanup for GetCapability
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed unnecessary files
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Revert "Added -v option to onnx_test_runner for debugging"
This reverts commit d1dd70938a94d648df1a1dbbc2e48d0b97e49ec8.
* Revert "Added debug statements for debugging CI error"
This reverts commit b86d41afed2aa29c3508155d6f9c8d3a7263cc60.
* incorporate Status Code changes
* ComputeFunc returns Status::OK() on success
* Use test names to disable tests for MYRIAD and VAD-R
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Rename local identifiers from CNNNetwork to OpenVINO network
CNNNetwork is an OpenVINO's API class that represents more than
just convolutional neural networks (CNNs). Renaming helps to avoid
confusion that the API's only support CNN type models.
* Added error message if building on windows
* Removed duplicate option in Cmake
* Removed unnecessary parameters in activation_opt_test
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Refactor Map search and access logic for efficiency and cleanliness.
* use C++ style casts
* Use os.path.join for python directory path operations
* use C++ style casts
* EP classes should use onnxruntime namespace
* Clean up fixes from PR comments
* Don't explicitly shutdown Py interpreter
* Remove debug print statements
Prints will be re-enabled later with a logging mechanism with
debug/verbose printing options.
* Decrement ref counts for used pyObjects
* Restore build instructions for other compilers
Content under the "Using other compilers" section has been
accidentally deleted by a previous commit. Restoring back that
content from the latest upstream repo.
* CMake code cleanup
Code clean up, commenting and formatting of CMake code.
* Don't pass the unused device_info parameter to OpenVINOGraph ctor.
* Add support for multiple I/O data types
Adds support for the following tensor data types for graph inputs
and outputs:
1) float
2) float16
3) int32
4) int16
5) int8
6) uint16
7) uint8
* clean up setup.py module list definition
* Deduce index of input using tracked input index map
Ignores initializers in case they are ordered before inputs.
* Removed debug statement in MO code
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* PR feedback
* Removed per_sample_tolerance for openvino
* Removed unnecessary disabled tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed debug function
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled tiny_yolo_v2 due to accuracy issues
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Changed the disabled reason for broken tests
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Disabled Reshape with no input
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Python formatting with Autopep8
* Minor fix for MYRIAD devices
* Added zero dimension check
* Also removed setting batch size for the network
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Set the threshold to larger value for MNIST
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Removed setting higher threshold in provider_test_utils
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* Check for --use_openvino in python wheel setup.py
Add openvino modules to the setup script so they are included in the
wheel package only when --use_openvino is passed as a build option.
* Removed nullptr checks for GetNode()
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
* subgraph with memcpy fix
* Linux compile errors fix
* Linux compile errors fix
* subgraph with memcpy fix
* Linux compile errors fix
* Linux compile errors fix
* memcpy (PR1020) fix implemented
* check graph viewer GetNode for nullptr at other places
* documents
* Review changes (UseSubgraph simplified)
* static_cast<int> removed
* static_cast<int> removed 2
* fall back to CPU implementation in GetCapability()
* check shape for null. fall back to CPU implementation in GetCapability()
* backend data errors fixed
* PR review changes
* disable Opset10 tests
* removed tests from main.cc of test runner. added a check at GetCapability()
* backend data and Model-Zoo related fixes
* patch to run tests and models separately
* As we consistently use non-const reference for modifiable arguments that cannot be null, update the conventions to reflect that.
Add a note on qualifying 'auto' to make the intent clearer and make it easier to notice accidental copies.
* Address PR comment by adding a statement around disabling copy/assignment/move for new classes until needed.
* Initial commit
* Rename DynamicPad to Pad
* More changes
* Add Unique operator
* Revert accidental check-in
* Fix CUDA Pad to align with changes
* More changes
* Fix more CUDA pad source files
* More fixes
* More changes
* More changes
* Avoid vector copy
* Update vector validation logic
* Fix build failures
* Fix build
* Fix build failure
* Fix tensorrt build
* Accommodate missing optional 'axes' when 'steps' is present in Slice op (#946)
* Accommodate missing optional axes when steps is present in Slice implementation
* PR feedback
* Update package links (#937)
* Update package links
* Minor fix
* Update README.md
* Minor edit
* Update onnx commit (#949)
* Update onnx commit
* disable failing tests which don't have to be fixed for this release
* dummy change to fix file permission
* fix file permission
* add --gen_doc to ci_build
* make gen-doc conditional to build/test step
* some fixes in the git diff check
* some more tweaks to the doc diff
* updated for input/output
* updated the contrib operator doc
* fix on missing input output descriptions
* fixed the problem of missing doc string, due to protobuf optimization
* fix
* revert last change
* moved gen_doc.py to /tools/python
* fixed typo
* Simple integration into CMake build system
* Adds vcpkg as a submodule and updates build.py to install hosting dependencies
* Don't create vcpkg executable if already created
* Fixes how CMake finds toolchain file and quick changes to build.py
* Removes setting the CMAKE_TOOLCHAIN_FILE in build.py
* Adds Boost Beast echo server and Boost program_options
* Fixes spacing problem with program_options
* Adds Microsoft headers to all the beast server headers
* Removes CXX 14 from CMake file
* Adds TODO to create configuration class
* Run clang-format on main
* Better exception handling of program_options
* Remove vckpg submodule via ssh
* Add vcpkg as https
* Adds onnxruntime namespace to call classes
* Fixed places where namespaces were anonymous
* Adds a TODO to use the logger
* Moves all setting namespace shortnames outside of onnxruntime namespace
* Add onnxruntime session options to force app to link with it
* Set CMAKE_TOOLCHAIN_FILE in build.py
* Remove whitespace
* Adds initial ONNX Hosting tests (#5)
* Add initial test which is failing linking with no main
* Adds test_main to get hosting tests working
* Deletes useless add_executable line
* Merge changes from upstream
* Enable CI build in Vienna environment
* make hosting_run*.sh executable
* Add boost path in unittest
* Add boost to TEST_INC_DIR
* Add component detection task in ci yaml
* Get tests and hosting to compile with re2 (#7)
* Add finding boost packages before using it in unit tests
* Add predict.proto and build
* Ignore unused parameters in generated code
* Removes std::regex in favor of re2 (#8)
* Removes std::regex in favor of re2
* Adds back find_package in unit tests and fixes regexes
* Adds more negative test cases
* Adding more protos
* Fix google protobuf file path in the cmake file
* Ignore unused parameters for pb generated code
* Updates onnx submodule (#10)
* Remove duplicated lib in link
* Follow Google style guide (#11)
* Google style names
* Adds more
* Adds an additional namespace
* Fixes header guards to match filepaths
* Consume protobuf
* Unit Test setup
* Json deserialization simple test cases
* Split hosting app to lib and exe for testability
* Add more cases
* Clean up
* Add more comments
* Update namespace and format the cmake files
* Update cmake/external/onnx to checkout 1ec81bc6d49ccae23cd7801515feaadd13082903
* Separate h and cc in http folder
* Clean up hosting application cmake file
* Enable logging and proper initialize the session
* Update const position for GetSession()
* Take latest onnx and onnx-tensorrt
* Creates configuration header file for program_options (#15)
* Sets up PredictRequest callback (#16)
* Init version, porting from prototype, e2e works
* More executor implementation
* Adds function on application startup (#17)
* Attempts to pass HostingEnvironment as a shared_ptr
* Removes logging and environment from all http classes
* Passes http details to OnStart function
* Using full protobuf for hosting app build
* MLValue2TensorProto
* Revert back changes in inference_session.cc
* Refactor logger access and predict handler
* Create an error handling callback (#19)
* Creates error callback
* Logs error and returns back as JSON
* Catches exceptions in user functions
* Refactor executor and add some test cases
* Fix build warning
* Add onnx as a dependency and in includes to hosting app (#20)
* Converter for specific types and more UTs
* More unit tests
* Update onnx submodule
* Fix string data test
* Clean up code
* Cleanup code
* Refactor logging to use unique id per request and take logging level from user (#21)
* Removes capturing env by reference in main
* Uses uuid for logging ids
* Take logging_level as a program argument
* Pass logging_level to default_logging_manager
* Change name of logger to HostingApp
* Log if request id is null
* Update GetHttpStatusCode signature
* Fix random result issue and camel-case names
* Rollback accidentally changed pybind_state.cc
* Rollback pybind_state.cc
* Generate protobuf status from onnxruntime status
* Fix function name in error message
* Clean up comments
* Support protobuf byte array as input
* Refactor predict handler and add unit tests
* Add one more test
* update cmake/external/onnx
* Accept more protobuf MIME types
* Update onnx-tensorrt
* Add build instruction and usage doc
* Address PR comments
* Install g++-7 in the Ubuntu 16.04 build image for vcpkg
* Fix onnx-tensorrt version
* Check return value during initialization
* Fix infinite loop when http port is in use (#29)
* Simplify Executor.cc by breaking up Run method (#27)
* Move request id to Executor constructor
* Refactor the logger to respect user verbosity level
* Use Arena allocator instead of device
* Creates initial executor tests
* Merge upstream master (#31)
* Remove all possible shared_ptrs (#30)
* Changes GetLogger to unique_ptr
* Reserve BFloat raw data vector size
* Change HostingEnvironment to being passed by lvalue and rvalue references
* Change routes to getting passed by const references
* Enable full protobuf if building hosting (#32)
* Building hosting application no longer needs use_full_protobuf flag
* Improve hosting application docs
* Move server core into separate folder (#34)
* Turn hosting project off by default (#38)
* Remove vcpkg as a submodule and download/install Boost from source (#39)
* Remove vcpkg
* Use CMake script to download and build Boost as part of the project
* Remove std::move for const references
* Remove error_code.proto
* Change wording of executable help description
* Better GenerateProtobufStatus description
* Remove error_code protobuf from CMake files
* Use all outputs if no filter is given
* Pass MLValue by const reference in MLValueToTensorProto
* Rename variables to argc and argv
* Revert "Use all outputs if no filter is given"
This reverts commit 7554190ab8e50ba6947648c2f3e2a3d4d9606ce0.
* Remove all header guards in favor of #pragma once
* Reserve size for output vector and optimize for-loop
* Use static libs by default for Boost
* Improves documentation for GenerateResponseInJson function
* Start Result enum at 0 instead of 1
* Remove g++ from Ubuntu's install.sh
* Update cmake files
* Give explanation for Result enum type
* Remove all program options shortcuts except for -h
* Add comments for predict.proto
* Fix JSON for error codes
* Add notice on hosting application docs that it's in beta
* Change HostingEnvironment back to a shared_ptr
* Handle empty output_filter field
* Fix build break
* Refactor unit tests location and groups
* First end-to-end test
* Add missing log
* Missing req id and client req id in error response
* Add one test case to validate failed resp header
* Add build flag for hosting app end to end tests
* Update pipeline setup to run e2e test for CI build
* Model Zoo data preparation and tests
* Add protobuf tests
* Remove mention of needing g++-7 in BUILD.md
* Make GetAppLogger const
* Make using_raw_data_ match the styling of other fields
* Avoid copy of strings when initializing model
* Escape JSON strings correctly for error messages (#44)
* Escape JSON strings correctly
* Add test examples with lots of carriage returns
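The escaping fix above boils down to never splicing a raw message into a JSON body; a serializer handles quotes, backslashes, and control characters such as carriage returns. A minimal Python sketch of the idea (the actual server code is C++; the field name here is illustrative):

```python
import json

def error_json(message: str) -> str:
    """Build an error body whose message is safely escaped.
    json.dumps handles quotes, backslashes, and control characters
    like \r and \n. (Sketch of the escaping idea, not the C++ code.)"""
    return '{"error_message": ' + json.dumps(message) + '}'

# A message full of carriage returns and quotes still round-trips.
body = error_json('bad input:\r\n got "NaN"')
print(json.loads(body)["error_message"])
```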
* Add result validation
* Remove temporary path
* Optimize model zoo test execution
* Improve reliability of test cases
* Generate _pb2.py during the build time
* README for integration tests
* Pass environment by pointer instead of shared_ptr to executor (#49)
* More Integration tests
* Remove generated files
* Make session private and use a getter instead (#53)
* logging_level to log_level for CLI
* Single model prediction shortcut
* Health endpoint
* Integration tests
* Rename to onnxruntime server
* Build ONNX Server application on Windows (#57)
* Gets Boost compiling on Windows
* Fix integer conversion and comparison problems
* Use size_t in converter_tests instead of int
* Fix hosting integration tests on Windows
* Removes checks for port because it's an unsigned short
* Fixes comparison between signed and unsigned data types
* Pip install protobuf and numpy
* Missing test data from the rename change
* Fix server app path (#58)
* Pass shared_ptr by const reference to avoid ref count increase (#59)
* Download test model during test setup
* Make download into test_util
* Rename ci pipeline for onnx runtime server
* Support up to 10MiB http request (#61)
* Changes minimum request size to 10MB to support all models in ONNX Model Zoo
* added tools for doc gen, added doc
* doc updated
* some fixes
* hooked up with build.py
* hooked up with build.py and fail on nonupdated doc
* update
* Fixed typos in docs for 'onnx_test_runner'
* TensorRT Execution Provider (preview) release
Updated build instructions and component governance and third party notices for TensorRT execution provider release.
* test runner option for tensorrt
updated to add option for tensorrt.
* Introduction to TensorRT Execution Provider
Intro README for TensorRT Execution Provider.
* Update BUILD.md
* Update TensorRT-ExecutionProvicer.md
* corrected typo in the filename
* corrected typos
* updated with corrections.
* removed conflicting edits.
* Update BUILD.md
* Prototype version that demonstrates it can work
* Switched to OrtValue and removed the OrtCustomOpTensor code.
* Support multiple outputs and reading of attributes
* Add custom domain handling to custom ops
* Update documentation
* more wording changes
* Addl TPN updates (#403)
* Updated TPN
* Update batch_norm_op_test.cc
* Update ThirdPartyNotices.txt
* Update ThirdPartyNotices.txt
* Update readme with package links
* Update README.md
* Update README.md
* Update README.md
* Merged Ryan and TPN changes into single PR
* minor fix
* added mkldnn to GPU pipeline. Required by C# library as it is the default execution provider
* Bump up version number for 0.2.1 release (#420)
* Eliminate the confusing double negative
I was having trouble parsing the caveat NOTE, proposing wording changes that I think reflect the meaning and avoid the confusion.
* Eliminate double negative without further explanation on role of this file.
Incorporate @pranavsharma feedback.
* updated nuget package metadata for MS compliance (#66)
* fixed metadata element -- use PackageProjectUrl instead of ProjectUrl (#67)
* Change version to 0.1.5
* Update README.md
* Update Versioning.md
* Update rename_manylinux.sh
Remove duplicate word
* Update README.md
Remove a 'the' as ONNX Runtime is a proper noun.
* Update CUDA version to 9.1 cudnn version to 7.1
* Update ReleaseManagement.md
* put tensorflow copyright headers
there are around 10 lines of code borrowed from tflite.
* Update README.md
Mention C++ API
* Update README.md
Fix link
* Update C_API.md
Fix broken link to onnxruntime_c_api.h
* Update ABI.md
Delete mention of COM and fix 'ONNX Runtime' to be two words
* Update README.md
* Update README.md
* Update C_API.md