* Add ability to initialize InferenceSession with a model that is already loaded.
* Cleanup some unnecessary namespace qualifications and some long lines.
* Remove InferenceSession::Initialize(std::shared_ptr<Model>&)
* Remove unit test for init from existing Model instance.
* Pass the OnnxRuntimeBuildDirectory to the Docker container
* Remove the requirement for the Docker host to set the env var
* Set the env var to the path where the build dir is mounted in the container
* Copy mkldnn to the output folder for Linux. NuGet doesn't resolve DLL dependencies correctly within a package
* Modify to copy all DLLs to the output folder
* Update RPATH for the shared library
* Simplified linker flags for RPATH
* Remove copying of DLLs to the output folder, since setting RPATH works fine now
* Improve VerifyKernelDef() performance when op has many inputs/outputs/type constraints.
* Added two modes for resolving type binding.
* Updated TypeBindingResolver to avoid heap allocation.
* Tweaked TypeBindingResolver for performance.
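The idea behind the TypeBindingResolver change can be sketched roughly as follows. This is a hypothetical Python illustration, not the actual onnxruntime C++ code; `build_type_binding` and `verify_kernel_def` are invented names. The point is to build the constraint-name → allowed-types lookup once, then check each input/output against it, instead of re-scanning the kernel def's constraint list for every argument.

```python
def build_type_binding(type_constraints):
    """Build a one-time lookup table from a kernel def's type constraints.

    type_constraints: dict mapping constraint name (e.g. "T") to an
    iterable of allowed type strings.
    """
    return {name: frozenset(types) for name, types in type_constraints.items()}

def verify_kernel_def(arg_bindings, binding_table):
    """Check each (constraint_name, actual_type) pair against the table.

    With the table prebuilt, verification is a hash lookup per argument
    rather than a linear scan over all constraints per argument.
    """
    for name, actual in arg_bindings:
        allowed = binding_table.get(name)
        if allowed is not None and actual not in allowed:
            return False
    return True
```

With many inputs/outputs and many constraints, this turns an O(args × constraints) scan into O(args) lookups.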
* Handle negative axes for reduce ops
* Negative axes are not handled in shape inference if the input shape is not known at that time.
* nit: use HandleNegativeAxis in provider/common.h
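For reference, the normalization that a helper like HandleNegativeAxis performs can be sketched in Python (a minimal rendering of the idea, not the C++ helper itself; the error message is illustrative):

```python
def handle_negative_axis(axis, tensor_rank):
    # ONNX-style negative axes count from the end: -1 means the last dimension.
    if not -tensor_rank <= axis < tensor_rank:
        raise ValueError(f"axis {axis} out of range for rank {tensor_rank}")
    return axis + tensor_rank if axis < 0 else axis
```

For a rank-4 tensor, axis -1 normalizes to 3 and axis 2 stays 2.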
* fixed typo in runtest.sh
* some fixes
* some fixes
* some fixes in the runtest.sh
* added test data url
* Fixes to the dotnet test scripts
* Fix a prior mistake regarding installation of apt-transport-https
* Added verbosity in the test run for easier debugging
* updated comment in the runtest.sh
* Advance ONNX commit, move Ngram files under ONNX and rename to TfIdfVectorizer
* Rename Ngram to TfIdfVectorizer and redeclare in ONNX domain
* Restore tfidfvectorizer tests
* Remove ML definition.
* Update ONNX version to pickup Scan spec change that adds scan_output_axes.
Add logic to transpose an output
- write to temporary buffer when executing subgraph
- transpose temporary buffer into Scan output when execution completes
Add unit tests
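The temp-buffer-then-transpose flow described above can be sketched with NumPy. This illustrates the data movement only, not the actual Scan kernel; `write_scan_output` is an invented name, and the iteration results are assumed to be stacked along axis 0 before being moved to the requested `scan_output_axes` position.

```python
import numpy as np

def write_scan_output(per_iteration_outputs, scan_output_axis):
    # Stack per-iteration subgraph outputs along axis 0 into a temporary
    # buffer, then move the iteration axis to the requested output axis.
    temp = np.stack(per_iteration_outputs, axis=0)
    return np.moveaxis(temp, 0, scan_output_axis)
```

For four iterations each producing a (2, 3) tensor, axis 0 yields shape (4, 2, 3) and axis 2 yields (2, 3, 4).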
* Update to ONNX dbf3581835e3a05716e10587511d7ab3b2cdc386 to pickup inferencing bugfix.
Update test to match.
* Disable some tests for opset 9 operators that haven't been implemented yet.
* matmul add fusion
* add shape check on Gemm input C
* Work around the issue with RemoveNode
* update the version support
* If MatMul has shape [K] * [K, N], update it to [1, K] * [K, N], so that it can work for Gemm
* Fuse Gemm+Activation into FusedGemm
* test
* Revert the change that fused MatMul with shape [K]*[K, N] to Gemm as shape [1, K]*[K, N]; this may cause a runtime failure, as we can't change the input data shape.
* Revert the change that altered the MatMul shape from [K]*[K, N] to [1, K]*[K, N]. It enables fusing MatMul + Add into Gemm, but the input data is not aware of this change, so the data shape is still [K]*[K, N] and causes a runtime issue.
* 1. Fix build issue for CUDA
2. Update Gemm so that we can fuse Matmul [K] * [K, N] + Add [1, N] into Gemm with shape [1,K] * [K, N] + [1, N]
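A NumPy sketch of why viewing the 1-D MatMul input [K] as [1, K] inside the fused Gemm kernel is numerically safe (as opposed to the reverted approach of changing the graph's input shape itself; all variable names here are illustrative):

```python
import numpy as np

# MatMul with a 1-D left input: [K] x [K, N] -> [N]
K, N = 4, 3
x = np.arange(K, dtype=np.float32)          # graph input, shape [K]
w = np.ones((K, N), dtype=np.float32)       # shape [K, N]
b = np.full((1, N), 0.5, dtype=np.float32)  # Add input C, shape [1, N]

# Gemm requires a 2-D A, so the fused kernel views x as [1, K] internally;
# the graph input data itself keeps its original shape [K].
gemm_out = x.reshape(1, K) @ w + b          # shape [1, N]
unfused = x @ w + b                         # broadcasts to [1, N] as well
assert np.array_equal(gemm_out, unfused)
```

The reshape is a view taken inside the kernel, so nothing that feeds data of shape [K] to the graph has to change.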
* Fix build issue
* Fuse the activation node even if it connects to the graph output
* resolve the merge conflicts
* Add test model for Gemm+Activation fusion
* refactor kernel registry to make it a little bit more readable.
* update
* update cudaexecutionprovider
* fix build break
* fix comments
* fix build break
Root cause:
cudaStreamWaitEvent is used after copying data from GPU memory to CPU memory, but the following node has CPU code that depends on the data. cudaEventSynchronize should be used instead.
Fix:
Add code in the executor to check the input memory type first; if it wants CPU memory, pass the CPUExecutionProvider type to BeforeUsingAsInput, which will then use cudaEventSynchronize to wait for the write event.
* Revert to ignoring optional subgraph inputs due to abandoning PR 216. Restores previous behaviour that changed a couple of days ago with the Scan v9 checkin.
* Update to allow either all inputs, or just required inputs to be provided for the subgraph.
* Update IterateSequence to prefer all inputs over required inputs.
* Switch to a nonblocking threadpool in the inference session and session state
* switch to eigen threadpool - first draft
* refine
* refine
* add a switch to easily revert back to windows thread pool
* switch thread pool in test runner and turn on leak checker
* remove unnecessary files
* fix build error
* more build fixes
* catch exceptions in parallel executor
* fix mac build error
* fix mac build error
* more build fixes
* more mac build fixes
* fix cv issue
* change macro to include cuda compiler for disabled compiler warning
* try switching the macro to win32 only
* test #error
* move #disable warning to the top
* Update onnxruntime_framework.cmake
* move eigen include to public scope
* turn off eigenthreadpool by default and add todo comment
* update
* cmake change
* rename
* update
* update
* add cmake
* fix build warnings.
* fix comments
* update cmake to avoid running gemmlowp tests
* update cmake
* update
* fix build break
* update
* fix comments
* fix test failure
* add one more test case with padding.
* Fix the Conv implementations of mkldnn and CUDA to use the updated ComputeKernelShape function.
* fix linux ci build break
* Check the pads attribute on Conv, and automatically fall back to CPU if the padding is not symmetric
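The symmetric-padding check can be sketched as follows, using the ONNX Conv `pads` attribute layout [x1_begin, x2_begin, ..., x1_end, x2_end] (`is_symmetric_padding` is an invented helper name, not the actual onnxruntime function):

```python
def is_symmetric_padding(pads):
    # ONNX Conv 'pads' layout: [x1_begin, x2_begin, ..., x1_end, x2_end].
    # Padding is symmetric when each spatial axis pads the same amount
    # at its beginning and its end.
    rank = len(pads) // 2
    return all(pads[i] == pads[i + rank] for i in range(rank))
```

For a 2-D Conv, [1, 1, 1, 1] is symmetric while [0, 1, 1, 0] pads axis 0 by 0 at the start but 1 at the end, triggering the CPU fallback.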
* Insert copy nodes after all graph transformers. It causes issues if the cast transformer runs before the memory copy transformer.