Commit graph

4596 commits

Author SHA1 Message Date
Maajid khan
27e778909d
[OpenVINO-EP] Enabling save/load blob feature (#7054)
* Enabling save/load blob feature for OpenVINO-EP

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Added changes to enhance save/load feature

-> This feature applies only to the MYRIAD device target
-> Cleaned up the code and added error checks

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Enabled the feature only for MyriadX and only for Linux

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed compilation issues on Windows

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Added changes to fix const subgraph issue

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed issues on Windows

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Added changes for the feature

-> Removed the default dump directory location set via CMake
-> Enabled saving blob dumps at the executable path
   by default

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Made save/load dump path configurable

-> The save/load blob dump path is now also configurable
via the C/Python APIs.

-> Introduced a flag named blob_dump_path

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
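The `blob_dump_path` option name comes from this PR, but the snippet below is only a hedged sketch of how such a provider option might be assembled for the OpenVINO EP through the Python API; the `device_type` value and the `(name, options)` pairing are illustrative assumptions, not taken from this commit.

```python
# Hedged sketch: assembling provider options for the OpenVINO EP.
# Only the option name "blob_dump_path" comes from this PR; the
# device_type value and the (name, options) layout are assumptions.

provider = ("OpenVINOExecutionProvider", {
    "device_type": "MYRIAD_FP16",       # the feature targets MYRIAD devices
    "blob_dump_path": "/tmp/ov_blobs",  # where compiled blobs are saved/loaded
})

# In a real program this pair would typically be handed to
# onnxruntime.InferenceSession(model_path, providers=[provider]).
print(provider[1]["blob_dump_path"])
```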

* Minor fixes added

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed python API issues

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Using GetEnvironmentVar to get the path

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed python runtime option issue

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>

* Fixed import network issue on Windows

Signed-off-by: MaajidKhan <n.maajidkhan@gmail.com>
2021-04-07 20:59:16 -07:00
Chen Fu
def4cc09c7
Add QGEMM benchmark (#7268)
* Add QGEMM benchmark
2021-04-07 20:24:49 -07:00
Sherlock
aa2c465143
Restrict ConvGrad to __CUDA_ARCH__>=700 (#7278)
* Restrict ConvGrad to __CUDA_ARCH__>=700

Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-04-07 20:10:29 -07:00
Vincent Wang
beb299e17d
ConvGrad CUDA Kernel Bugfix (#7273)
* bugfix

* add ut
2021-04-08 08:22:18 +08:00
baijumeswani
844361bc67
Support eval mode and torch.no_grad context in ORTModule and restructure ortmodule.py (#7162) 2021-04-07 09:29:54 -07:00
George Wu
025abf996d
fix for using tensorrt:20.12 base image (#7264) 2021-04-07 08:48:43 -07:00
Sherlock
4bc17ca04e
CUDA ConvGrad Kernel (#7227)
* ConvGrad CUDA impl

* Set up the test case for Deberta Conv1D

* Add fp16 test

Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-04-06 22:09:06 -07:00
TomWildenhain-Microsoft
8219518aa8
Fix initializer counts when used as graph output (#7260)
Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>
2021-04-06 22:52:22 -04:00
Jesse Benson
2ec452cdad Remove ROCM workaround for half-to-double cast. 2021-04-06 17:46:46 -07:00
Derek Murray
25e261f196
Avoid passing zero bias to Gemm in gradients (#7244)
* Avoid passing zero bias to Gemm in gradients

The bias argument to Gemm is optional and defaults to zero. Therefore we do not need to generate zero initializers and pass them to that argument.

* Remove unused declaration.
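The reasoning above rests on the Gemm contract Y = alpha*(A@B) + beta*C with C optional and defaulting to zero. A plain-Python reference sketch (the `gemm` helper is illustrative, not ORT code) shows why omitting C and passing an explicit zero bias are equivalent:

```python
# Plain-Python reference of the ONNX Gemm contract: Y = alpha*(A@B) + beta*C,
# where C is optional and treated as zero when absent. Helper is illustrative.

def gemm(a, b, c=None, alpha=1.0, beta=1.0):
    y = [[alpha * sum(a[i][k] * b[k][j] for k in range(len(b)))
          for j in range(len(b[0]))] for i in range(len(a))]
    if c is not None:
        y = [[y[i][j] + beta * c[i][j] for j in range(len(y[0]))]
             for i in range(len(y))]
    return y

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
zeros = [[0.0, 0.0], [0.0, 0.0]]

# Explicit zero bias and omitted bias give identical results,
# so the gradient builder need not emit a zero initializer.
assert gemm(a, b, zeros) == gemm(a, b)
print(gemm(a, b))
```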
2021-04-06 16:49:34 -07:00
Yulong Wang
405ca49012
build ONNXRuntime into WebAssembly (#6478)
* Simplified version of WebAssembly support to keep most of existing data structures and add cmake using Ninja and emcmake

* Clean up CMakeLists.txt and add an example to create and compute a kernel

* Load a model from bytes and remove graph building steps

* Add all cpu and contrib ops with mlas library

* WebAssembly build with Onnxruntime C/CXX API

* Use protobuf cmakefile directory instead of adding every necessary source file

* Fix invalid output at example

* add missing files

* Change an example to use Teams model and support ort mobile format

* add API for javascript

* fix input releasing in _ort_run()

* update API

* Let onnxruntime cmake build WebAssembly with option '--wasm'

* allow one-step building for wasm

* Make build script working on Linux and MacOS

* Fix broken build from Windows command

* Enable unit test on building WebAssembly

* Resolve comments

* update build flags

* wasm conv improvement from: 1) GemmV; 2) Depthwise direct convolution 3x3; 3) Direct convolution 3x3

* Cleaned mlas unittest.

* use glob

* update comments

* Update baseline due to loss scale fix (#6948)

* fix stream sync issue (#6954)

* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)

* Update EyeLike CPU kernel.

* Update Mod CPU kernel.

* Update Multinomial CPU kernel.

* Slight improvement to Pad CPU kernel binary size.

* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.

* Fix warning from setting multiple MSVC warning level options. (#6917)

Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.

* MLAS: quantized GEMM update (#6916)

Various updates to the int8_t GEMMs:

1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.
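Point 3 above describes per-column zero points for matrix B. A minimal numeric sketch of what that computation means (plain Python standing in for the MLAS int32-accumulating kernel; the helper name is hypothetical):

```python
# Sketch of a quantized GEMM with per-column zero points for B:
# C[i][j] = sum_k (A[i][k] - za) * (B[k][j] - zb[j]), accumulated as ints.
# The qgemm helper is illustrative, not the MLAS API.

def qgemm(a_u8, za, b_s8, zb_cols):
    rows, inner, cols = len(a_u8), len(b_s8), len(b_s8[0])
    return [[sum((a_u8[i][k] - za) * (b_s8[k][j] - zb_cols[j])
                 for k in range(inner))
             for j in range(cols)] for i in range(rows)]

a = [[130, 2], [7, 255]]   # uint8 activations
b = [[10, -3], [4, 6]]     # int8 weights
za = 128                   # single zero point for A
zb = [1, -2]               # one zero point per column of B

c = qgemm(a, za, b, zb)
print(c)
```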

* Implement QLinearAveragePool with unit tests. (#6896)

Implement QLinearAveragePool with unit tests.

* Attention fusion detect num_heads and hidden_size automatically (#6920)

* fixed type to experimental session constructor (#6950)

* fixed type to experimental session constructor

Co-authored-by: David Medine <david.medine@brainproducts.com>

* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)

Co-authored-by: Ori Levari <orlevari@microsoft.com>

* Fix possible fd leak in NNAPI (#6966)

* Release buffers for prepacked tensors (#6820)

Unsolved problems:

1. One test failure was caused by a bug in cuDNN RNN kernels: they can allocate a buffer and only partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem more broadly, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?


2. Prepacking is used more widely than we know. For instance, cuDNN RNN kernels also cache their weights: they mix several weight tensors together into a single buffer and never touch the original weight tensors again. This is the same idea as pre-packing, but they didn't override the virtual function and never tried to release those weight tensors, leading to memory waste. Some other kernels appear to have similar behavior; it is worth investigating how much memory we could save by cleaning those up too.

3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, so that during initializer deserialization we only avoid tracing the initializers that will be prepacked.

* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)

* add CI

* fix test in ci

* fix flags for nsync in wasm build

* add copyright banner

* fix wasm source glob

* add missing exports

* resolve comments

* Perf gain by making PackB 4 wide instead of 16 in GEMM for WASM.
Removed the now-unneeded direct conv from the previous perf tuning.

* fix buildbreak introduced from latest master merge

* fix buildbreak in mlasi.h

* resolve all comments except MLAS

* rewrite the 3 PackB-related functions for WASM_SCALAR separately rather than using #ifdef in each,
plus other changes in MLAS according to PR feedback.

* More complete scalar path in sgemm from Tracy.

* Fix edge case handling in the 3x3 depthwise conv2d kernel:
  *) support inputs with W == 1 and H == 1
  *) recalculate accurate pad_right and pad_bottom
  *) support hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H == 1 and there is no left/top padding

* Add more test coverage for conv depthwise from Tracy.
Fix one typo according to PR.

* resolve comments

* replace typedef by using

* do not use throw in OrtRun()

* output error message

Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
2021-04-06 16:18:10 -07:00
ashbhandare
2aa89989c4
Not-where fusion (#7182)
* Not-where fusion

* Change to rewrite rule

* Add to inference transforms

* Support multiple Where consumers

* review comments
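The Not-Where rewrite above relies on the identity Where(Not(c), x, y) == Where(c, y, x), which lets the Not node be dropped by swapping the Where branches. A scalar sketch of that identity (the `where` helper is illustrative; the real rewrite operates on graph nodes):

```python
# Element-wise reference of ONNX Where: picks x where cond is true, else y.
def where(cond, x, y):
    return [xi if ci else yi for ci, xi, yi in zip(cond, x, y)]

cond = [True, False, True, False]
x = [1, 2, 3, 4]
y = [10, 20, 30, 40]

unfused = where([not c for c in cond], x, y)  # Where(Not(c), x, y)
fused = where(cond, y, x)                     # Where(c, y, x): Not eliminated
assert fused == unfused
print(fused)
```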
2021-04-06 16:12:26 -07:00
Yufeng Li
790fc11e60
QDQ: type conversion and more ops support (#7243)
* QDQ: add int8_t to uint8_t conversion and Relu/AveragePool support
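The int8-to-uint8 conversion works because dequantization is scale * (q - zp): shifting both the quantized values and the zero point by 128 leaves the dequantized tensor unchanged. A small numeric check (values here are made up for illustration):

```python
# Identity behind the s8 -> u8 QDQ conversion:
#   scale * (q - zp) == scale * ((q + 128) - (zp + 128))
scale, zp_s8 = 0.05, -10
q_s8 = [-128, -10, 0, 127]

q_u8 = [q + 128 for q in q_s8]   # shifted into [0, 255]
zp_u8 = zp_s8 + 128

deq_s8 = [scale * (q - zp_s8) for q in q_s8]
deq_u8 = [scale * (q - zp_u8) for q in q_u8]
assert deq_s8 == deq_u8          # dequantized values are unchanged
print(q_u8)
```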
2021-04-06 15:30:31 -07:00
raviskolli
5d759e182b
Allocate external ROCm allocator via PyBind (#7148)
* Enabled rocm support for graph transformations

* Support for external Hip allocator

* Added const_cast to reinterpret_cast to fix compiler issue

* Another crack at fixing the compile error

* More compilation fixes

* Added compilation flags to load_inline extension

* Added ROCM, ROCM_PINNED constants

* Changes to address PR comments

* Changed gpu identifier from ROCM to CUDA

* Added HIP compilation flag for torch inline functions

* Fixed a typo in header allocator string formatting

* Fix for runtime error with external_cuda_allocator

* Removed cuda/rocm specific code paths for allocators

* More name changes to generic gpu from rocm/cuda

* Removed duplicate allocator creation

* Rename cuda_external_ config options as gpu_external_

* Rename hip_mem_limit to gpu_mem_limit

* Rename cuda_mem_limit to gpu_mem_limit
2021-04-06 15:23:51 -07:00
Derek Murray
6308e709cc
Update opset for other training graphs to 12. (#7259)
Co-authored-by: Derek Murray <demurra@microsoft.com>
2021-04-06 13:02:59 -07:00
G. Ramalingam
a9ff4c29e5
Add function body to GeluGrad schema (#7190)
* Add GeluGrad function definition

* complete gelugrad function definition

* add opset to function definition
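The function body being added encodes the derivative of GELU. Assuming the exact erf formulation (rather than the tanh approximation), d/dx [x * Phi(x)] = Phi(x) + x * phi(x), where Phi/phi are the standard normal CDF/PDF; a sketch checked against a finite difference:

```python
import math

# GELU(x) = x * Phi(x), with Phi the standard normal CDF (exact erf form).
def gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Analytic gradient: Phi(x) + x * phi(x), phi the standard normal PDF.
def gelu_grad(x):
    cdf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return cdf + x * pdf

h = 1e-6
for x in (-2.0, -0.5, 0.0, 0.7, 3.0):
    numeric = (gelu(x + h) - gelu(x - h)) / (2 * h)
    assert abs(gelu_grad(x) - numeric) < 1e-5
print(gelu_grad(0.0))
```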
2021-04-06 12:40:59 -07:00
Zhang Lei
dbcfc4bee6
Add mlas_bench tools. Starting with sconv bench and sgemm bench. (#7139)
* Add mlas_bench tools. Starting with sconv bench and sgemm bench.

* Some update with build related.
2021-04-06 10:30:18 -07:00
ashari4
56b22c1c6b
Fix assert that the tensor's device type is 'cpu' #7248 2021-04-06 09:08:32 -07:00
ashbhandare
e9ffcfa247
Add cuda kernels for GreaterOrEqual, LessOrEqual, Where; modify Clip to avoid memcpy (#7187)
* Where and Clip cuda kernel support

* GreaterOrEqual and LessOrEqual cuda kernels

* Clip input GPU mem

* review comments

* Add CPU kernel as well

* review comment

* Add kernel def hash for new op kernels

* Fix CI
2021-04-06 09:04:38 -07:00
Derek Murray
c85657cfd7
Update test_training_model.onnx to opset 12. (#7251)
Co-authored-by: Derek Murray <demurra@microsoft.com>
2021-04-06 07:49:58 -07:00
Tracy Sharpe
a9dbb511fb
MLAS: fix qgemm bus error with Android + ARM32 (#7250) 2021-04-05 22:46:04 -07:00
Olivia Jain
fb40602ea2
Mem trt (#6868)
* adding trt comparison and memory consumption

* creating separate docker file
2021-04-05 22:16:12 -07:00
Changming Sun
2fcd69d644
Cleanup build.py (#7245) 2021-04-05 18:49:29 -07:00
Changming Sun
5bd192c439
Update ContribOperators.md (#7246) 2021-04-05 17:11:33 -07:00
Pranav Prakash
3b16afc0db
Make dW optional for convgrad (#7083) 2021-04-05 17:05:20 -07:00
Guoyu Wang
c5973fbbac
Update the build script for Android AAR package (#7229)
* Update the build script for Android AAR package

* Address CR comments
2021-04-05 16:37:22 -07:00
Suffian Khan
9f14af9809
Add BERT-L perf regression test on MI100 and re-enable batch size test (#7240)
* restore bs test and add perf test

* update perf number and fix path to results
2021-04-05 15:51:52 -07:00
Ryan Lai
10102c09b6
Add better model test error messaging (#7239) 2021-04-05 14:59:19 -07:00
Ashwini Khade
e7c5dcd572
Fix Zip-Nuget-Java Packaging Pipeline (#7208)
* Ignore test failures due to opset support

* skip identity sequence test

* plus fixes
2021-04-05 10:58:13 -07:00
Chun-Wei Chen
3ee9b0ec4d
Add detailed assertion error message (#7232) 2021-04-05 10:05:40 -07:00
Marek Šuppa
008065aab1
Update README.md (#7043)
* Fix the precision type (switch from nonexistent `int32` to `fp32`).
2021-04-05 10:03:14 -07:00
ashbhandare
2b8513539e
Div mul fusion (#7183)
* Div mul fusion

* Change to rewrite rule

* Add to inference transformers
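If the rewrite targets the common pattern Mul(y, Div(1, x)) (my reading of a "div mul fusion"; the exact matched pattern lives in the rewrite-rule source), it fuses the reciprocal-then-multiply into a single Div(y, x). A scalar sanity check of that equivalence:

```python
# Hypothetical scalar view of the fused vs. unfused pattern:
#   unfused: y * (1 / x)    fused: y / x
def unfused(y, x):
    return y * (1.0 / x)

def fused(y, x):
    return y / x

for y, x in [(6.0, 3.0), (-1.5, 0.5), (7.0, 2.0)]:
    assert abs(unfused(y, x) - fused(y, x)) < 1e-12
print(fused(6.0, 3.0))
```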
2021-04-05 09:35:30 -07:00
Weixing Zhang
74ee24cf7f
rename cuda_mem_limit and hip_mem_limit to gpu_mem_limit for both CUDA EP and ROCm EP (#7226)
With this change, differentiating between the CUDA EP and the ROCm EP is no longer needed in training scripts when the mem_limit option needs to be set.
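The old key names (`cuda_mem_limit`, `hip_mem_limit`) and the new shared key (`gpu_mem_limit`) come from this and the related rename commits; the dict-based layout below is only an illustrative sketch of why the rename removes EP-specific branching:

```python
# Before the rename, each EP needed its own memory-limit key (names from
# the rename commits; the dict layout itself is illustrative).
old_options = {
    "CUDAExecutionProvider": {"cuda_mem_limit": 2 * 1024**3},
    "ROCMExecutionProvider": {"hip_mem_limit": 2 * 1024**3},
}

# After the rename, both EPs accept the same key, so a training script
# can set the limit without branching on the provider.
new_options = {ep: {"gpu_mem_limit": 2 * 1024**3} for ep in old_options}
print(sorted(new_options))
```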

Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
2021-04-05 09:04:04 -07:00
baijumeswani
68b12a6179
Support for saving and loading pytorch compatible state dictionaries (#7220)
* Override methods on torch.nn.Module to get direct access to the methods on the original module.
2021-04-05 03:40:41 -07:00
Yufeng Li
8d737f9770
handle optional input in quant topo sort (#7223) 2021-04-02 20:42:48 -07:00
Weixing Zhang
59b57d8322
HSA_NO_SCRATCH_RECLAIM and RCCL_ALLTOALL_KERNEL_DISABLE are not needed for ROCm 4.1 (#7224)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
2021-04-02 18:19:11 -07:00
Ahmad Zakaria
ba5f056b09
move trt_profile to TensorrtFuncState and reuse it (#7195)
use unordered_set instead of unordered_map to keep track of dynamic shape tensors with shape updates

fix: insert input_name in the set of input_names

move trt_profile to TensorrtFuncState and reuse it
2021-04-02 17:09:03 -07:00
Weixing Zhang
ef88dc912c
enable more unit tests for ROCM EP (#7222) 2021-04-02 15:57:08 -07:00
Guoyu Wang
afbbeaa30a
[NNAPI/CoreML EP] Add Onnx opset 14 support (#7211)
* Add opset 14 support for nnapi/coreml ep

* Address CR comments
2021-04-02 13:18:47 -07:00
Sherlock
a98c2ebb8c
Enable saving optimized models in OrtModule (#7214)
* Enable saving optimized models in OrtModule

Co-authored-by: Sherlock Huang <bahuang@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-04-02 12:37:05 -07:00
RandySheriffH
ebde320950
Add cupti path for python gpu packaging pipeline (#7200)
* add cupti dll path for py3.8

* correct path

* add prints

* replace path join

* add all path

* restore pipeline

* format

* expand path only for python 38&39

* add all cupti path

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
2021-04-02 12:12:46 -07:00
Weixing Zhang
2d352056cf
Support SkipLayerNorm for ROCm EP (#7210)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
2021-04-02 09:03:30 -07:00
Weixing Zhang
a3f17c8b0d
update lamb and GatherGrad kernel for ROCm EP (#7184)
With ROCm 4.1, the CUDA implementations of Lamb and GatherGrad can be
reused for the ROCm EP.
2021-04-02 09:02:49 -07:00
Weixing Zhang
17f91ff410
remove unneeded header file (#7193)
Co-authored-by: Weixing Zhang <wezhan@microsoft.com>
2021-04-01 21:05:58 -07:00
Ryan Hill
5a6d477625
Make IDataTransfer be directly shared with shared providers (#7215) 2021-04-01 20:39:16 -07:00
Edward Chen
0ebeaf529d
Check kernel def hashes (#7120)
Add unit test for verifying kernel def hashes.
Add way to add new types to kernel definition without changing hash.
2021-04-01 17:42:58 -07:00
ashbhandare
15c67ddbf0
Make output 1 of ConcatTraining Optional and place on CPU (#7199)
* Optional input 1 on CPU ConcatTraining

* Rename output_1
2021-04-01 16:05:17 -07:00
Jesse Benson
4543459984 MIOpen supports MIOPEN_REDUCE_TENSOR_AVG now. 2021-04-01 16:00:34 -07:00
Yufeng Li
34a8b22186
disable prepacking in training (#7201)
* disable prepacking in training
2021-04-01 14:03:47 -07:00
sfatimar
52bcef4d4f
Openvino ep 2021.3 (#7180)
* Integrate openvino-ep-2021.3

* operators type

* changed the MYRIAD device name casing as it is case sensitive

* logging information for openvino-ep-2021.3

* Unit test fix

* Resize operator added for myriad

* Fixed python tests for CPU and GPU

* data commit for the Loop/Tile and GatherElements failures

* adding checks for Where

* fixing gatherelements and loop tests

* disabling the instance normalization test for now as there seems to be a
MYRIAD bug; putting Loop in the supported ops only because all the tests
fail

* GatherElements op test: took care of the warning message

* condition needs to be an initializer

* Disabled python test for Myriad

* Disable compilation warning for MSVC windows compiler

* softmax_test: threedimaxis0 and threedimaxis1 tests give an accuracy mismatch;
the disabled tensoroptest gives an accuracy mismatch;
the gather test gives an accuracy mismatch

* Updated with ov version 2021.3

* Updated with ov version 2021.3

* Updated README

* Disabling python tests for cpu

* Disabling python tests with accuracy mismatch on cpu

* Added fix for Linux CI Pipeline failure

-> Disabled tests that were throwing segfault

Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: MaajidKhan <n.maajidkhan@gmail.com>
Co-authored-by: Aravind <aravindx.gunda@intel.com>
2021-04-01 11:28:54 -07:00