* Enabling save/load blob feature for OpenVINO-EP
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added changes to enhance the save/load feature
-> This feature applies only to the MYRIAD device target
-> Cleaned up the code and added error checks
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Enabled the feature only for MyriadX, and only on Linux
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed compilation issues on Windows
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added changes to fix const subgraph issue
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed issues on Windows
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Added changes for the feature
-> Removed the default dump-location directory set via cmake
-> Enabled saving blob dumps at the executable path by default
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Made the save/load dump path configurable
-> The save/load blob dump path is now also configurable
via the C/Python APIs.
-> Introduced a flag named blob_dump_path
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
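A minimal sketch of how the new blob_dump_path flag might be passed through the Python API. Only `blob_dump_path` comes from this change; the other option names and values (e.g. `device_type` and `MYRIAD_FP16`) and the session-creation call are illustrative assumptions, not the PR's exact plumbing.

```python
# Hypothetical illustration of supplying the blob_dump_path runtime option
# to the OpenVINO EP from Python; keys other than blob_dump_path are assumed.
provider_options = {
    "device_type": "MYRIAD_FP16",       # the feature applies only to MYRIAD
    "blob_dump_path": "/tmp/ov_blobs",  # where save/load blob dumps are written
}
providers = [("OpenVINOExecutionProvider", provider_options)]

# With onnxruntime installed, a session would then be created roughly as:
# import onnxruntime as ort
# sess = ort.InferenceSession("model.onnx", providers=providers)
```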
* Minor fixes added
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed python API issues
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Using GetEnvironmentVar to get the path
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed python runtime option issue
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* Fixed the import-network issue on Windows
Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
* ConvGrad CUDA impl
* Set up the test case for Deberta Conv1D
* Add fp16 test
Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
* Avoid passing zero bias to Gemm in gradients
The bias argument to Gemm is optional and defaults to zero. Therefore we do not need to generate zero initializers and pass them to that argument.
* Remove unused declaration.
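The zero-bias reasoning can be sketched in plain NumPy (this is an illustrative model of Gemm's semantics, not the actual gradient-builder code): Gemm computes `Y = alpha * A @ B + beta * C`, and an absent `C` behaves exactly like a zero `C`, so the zero initializer can simply be dropped.

```python
import numpy as np

def gemm(a, b, c=None, alpha=1.0, beta=1.0):
    """ONNX-style Gemm: Y = alpha * A @ B + beta * C. C is optional and
    defaults to zero, so a zero bias can simply be omitted."""
    y = alpha * (a @ b)
    if c is not None:
        y = y + beta * c
    return y

a = np.ones((2, 3))
b = np.ones((3, 4))
# Passing an explicit zero bias and omitting the bias give identical results.
assert np.array_equal(gemm(a, b), gemm(a, b, np.zeros((2, 4))))
```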
* Simplified version of WebAssembly support that keeps most of the existing data structures and adds a CMake build using Ninja and emcmake
* Clean up CMakeLists.txt and add an example to create and compute a kernel
* Load a model from bytes and remove graph building steps
* Add all CPU and contrib ops with the MLAS library
* WebAssembly build with Onnxruntime C/CXX API
* Use protobuf cmakefile directory instead of adding every necessary source file
* Fix invalid output in the example
* add missing files
* Change an example to use Teams model and support ort mobile format
* add API for JavaScript
* fix input releasing in _ort_run()
* update API
* Let onnxruntime cmake build WebAssembly with option '--wasm'
* allow one-step building for wasm
* Make the build script work on Linux and macOS
* Fix broken build from the Windows command line
* Enable unit tests when building WebAssembly
* Resolve comments
* update build flags
* WASM conv improvements from: 1) GemmV; 2) depthwise direct convolution 3x3; 3) direct convolution 3x3
* Cleaned up the MLAS unit test.
* use glob
* update comments
* Update baseline due to loss scale fix (#6948)
* fix stream sync issue (#6954)
* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)
* Update EyeLike CPU kernel.
* Update Mod CPU kernel.
* Update Multinomial CPU kernel.
* Slight improvement to Pad CPU kernel binary size.
* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.
* Fix warning from setting multiple MSVC warning level options. (#6917)
Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.
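The idea of the fix can be sketched in a few lines (illustrative Python, not the actual CMake logic; `set_msvc_warning_level` is a hypothetical helper): replace an existing `/Wn` flag in the compiler-flags string instead of appending a second one, which is what triggers the MSVC warning about overriding options.

```python
import re

def set_msvc_warning_level(flags: str, level: int) -> str:
    """Replace an existing /Wn (n = 0..4) flag instead of appending another,
    avoiding MSVC's warning about one option overriding another."""
    new_flag = f"/W{level}"
    if re.search(r"/W[0-4]", flags):
        return re.sub(r"/W[0-4]", new_flag, flags)
    return (flags + " " + new_flag).strip()

print(set_msvc_warning_level("/DWIN32 /W3 /O2", 4))  # /DWIN32 /W4 /O2
```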
* MLAS: quantized GEMM update (#6916)
Various updates to the int8_t GEMMs:
1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.
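Per-column zero points for matrix B can be illustrated with a small NumPy reference (a simplified model of the quantized GEMM math, not the MLAS kernel itself; `qgemm` is a hypothetical name): each column of B subtracts its own zero point before the int32 accumulation.

```python
import numpy as np

def qgemm(a_u8, b_u8, a_zero, b_zero_per_col):
    """Reference u8 GEMM with a per-column zero point for B:
    C[m, n] = sum_k (A[m, k] - a_zero) * (B[k, n] - b_zero[n]),
    accumulated in int32."""
    a = a_u8.astype(np.int32) - int(a_zero)
    b = b_u8.astype(np.int32) - b_zero_per_col.astype(np.int32)  # broadcasts over rows
    return a @ b

a = np.array([[1, 2], [3, 4]], dtype=np.uint8)
b = np.array([[5, 6], [7, 8]], dtype=np.uint8)
c = qgemm(a, b, a_zero=1, b_zero_per_col=np.array([5, 6], dtype=np.uint8))
```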
* Implement QLinearAveragePool with unit tests. (#6896)
Implement QLinearAveragePool with unit tests.
* Attention fusion detect num_heads and hidden_size automatically (#6920)
* fixed type to experimental session constructor (#6950)
* fixed type to experimental session constructor
Co-authored-by: David Medine <david.medine@brainproducts.com>
* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)
Co-authored-by: Ori Levari <orlevari@microsoft.com>
* Fix possible fd leak in NNAPI (#6966)
* Release buffers for prepacked tensors (#6820)
Unsolved problems:
1. One test failure was caused by a bug in the cuDNN RNN kernels: they can allocate a buffer and only partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?
2. Prepacking is used more widely than we know. For instance, the cuDNN RNN kernels also cache their weights: they mix several weight tensors together into a single buffer and never touch the original weight tensors again. This is the same idea as prepacking, but they didn't override the virtual function and never tried to release those weight tensors, leading to memory waste. It also seems that some other kernels have similar behavior. I wonder how much memory we could save if we cleaned those up too.
3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, and then, during initializer deserialization, only avoid tracing those initializers that will be prepacked.
* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)
* add CI
* fix test in ci
* fix flags for nsync in wasm build
* add copyright banner
* fix wasm source glob
* add missing exports
* resolve comments
* Perf gain by changing PackB width from 16 to 4 in GEMM for WASM.
Remove the direct conv from the previous perf tuning, which is no longer needed.
* fix buildbreak introduced from latest master merge
* fix buildbreak in mlasi.h
* resolve all comments except MLAS
* Rewrite the 3 PackB-related functions separately for WASM_SCALAR rather than using #ifdef in each,
plus other changes according to PR feedback in MLAS.
* More complete scalar path in sgemm from Tracy.
* Fix edge-case handling in the 3x3 depthwise conv2d kernel:
*) support input W == 1 and H == 1
*) recalculate accurate pad_right and pad_bottom
*) support hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H == 1 and there is no left/top padding
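The pad recalculation above can be sketched as follows (a simplified model under assumed SAME-style output semantics; `effective_end_pad` is a hypothetical helper, not the kernel's actual function). With a 3x3 kernel, an input of width 1, and no left padding, the effective right pad works out to 2.

```python
import math

def effective_end_pad(in_size, kernel=3, stride=1, pad_begin=0, out_size=None):
    """Recompute the effective end padding (pad_right / pad_bottom) so the
    kernel covers the requested output size."""
    if out_size is None:
        out_size = math.ceil(in_size / stride)  # SAME-style output size
    needed = (out_size - 1) * stride + kernel   # extent the kernel must span
    return max(needed - in_size - pad_begin, 0)

assert effective_end_pad(1) == 2  # W == 1, no left pad -> hidden pad_right == 2
```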
* Add more test coverage for depthwise conv from Tracy.
Fixed one typo according to PR feedback.
* resolve comments
* replace typedef by using
* do not use throw in OrtRun()
* output error message
Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
* Enabled rocm support for graph transformations
* Support for external Hip allocator
* Added const_cast to reinterpret_cast to fix compiler issue
* Another crack at fixing the compile error
* More compilation fixes
* Added compilation flags to load_inline extension
* Added ROCM, ROCM_PINNED constants
* Changes to address PR comments
* Changed gpu identifier from ROCM to CUDA
* Added HIP compilation flag for torch inline functions
* Fixed a typo in header allocator string formatting
* Fix for runtime error with external_cuda_allocator
* Removed cuda/rocm specific code paths for allocators
* More name changes to generic gpu from rocm/cuda
* Removed duplicate allocator creation
* Rename cuda_external_ config options as gpu_external_
* Rename hip_mem_limit to gpu_mem_limit
* Rename cuda_mem_limit to gpu_mem_limit
* Where and Clip cuda kernel support
* GreaterOrEqual and LessOrEqual cuda kernels
* Clip input GPU mem
* review comments
* Add CPU kernel as well
* review comment
* Add kernel def hash for new op kernels
* Fix CI
With this change, differentiating between the CUDA EP and the ROCm EP is no longer needed in the training script when the mem_limit option needs to be set.
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
* Use unordered_set instead of unordered_map to keep track of dynamic-shape tensors with shape updates
* fix: insert input_name into the set of input_names
* Move trt_profile to TensorrtFuncState and reuse it
* Integrate openvino-ep-2021.3
* operators type
* changed the MYRIAD string as it is case-sensitive
* logging information for openvino-ep-2021.3
* Unit test fix
* Resize operator added for myriad
* Fixed python tests for CPU and GPU
* data commit for loop tile and gatherelements failure
* adding checks for Where
* fixing gatherelements and loop tests
* disabling the instance normalization test for now, as there seems to be a
MYRIAD bug; putting Loop in the supported ops only because all the tests
fail
* GatherElements op test: took care of the warning message
* condition needs to be an initializer
* Disabled python test for Myriad
* Disable compilation warnings for the MSVC Windows compiler
* softmax_test: the ThreeDimAxis0 and 1 tests give an accuracy mismatch;
the disabled TensorOpTest tests give an accuracy mismatch;
the gather test gives an accuracy mismatch
* Updated with ov version 2021.3
* Updated with ov version 2021.3
* Updated README
* Disabling python tests for cpu
* Disabling python tests with accuracy mismatch on cpu
* Added fix for Linux CI Pipeline failure
-> Disabled tests that were throwing segfaults
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>