Add functionality to the Graph class so that it can be dumped to protobuf with the float initializers stored in an external binary file.
This change is meant to avoid hitting the 2GB protobuf limit when dumping large graphs.
This limit was particularly easy to exceed when dumping graphs after auto-diff.
The use of the external file is limited to initializers larger than a user-specified threshold.
This gives users the option to keep small initializers, such as the shape constants consumed by Reshape and Transpose, inside the ONNX file so that shape inference can still use them.
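The C++ Graph dump is internal to ONNX Runtime, but the same threshold-based externalization can be sketched at the Python level with onnx's external-data helpers (file names below are placeholders):

```python
import onnx

# Re-save a large model with big float initializers moved to a separate binary file.
# Initializers smaller than size_threshold (in bytes) stay embedded in the .onnx file,
# so small shape constants remain visible to shape inference.
model = onnx.load("large_model.onnx")
onnx.save_model(
    model,
    "large_model_external.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="large_model_external.bin",  # binary file referenced from the .onnx graph
    size_threshold=1024,                  # only tensors larger than 1 KB are externalized
)
```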
* fusion: support runtime edge shape checking
* trim ctor
* add test
* fix
* Update test_shape_infer_helper.py
* use torch input sizes as dynamic axis hints (see the sketch after this list)
* check dir
* update
* support LongformerAttention
* update and add support for BERT ops
* trim
* review comments
* review comments
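As background for the "use torch input sizes as dynamic axis hints" item above, a minimal export sketch (hypothetical model, not the helper's actual code) showing how the concrete input size used at export time relates to the axes marked dynamic:

```python
import torch

# The concrete export-time input size (1, 128) provides the hint values,
# while dynamic_axes marks which dimensions may vary at runtime.
model = torch.nn.Linear(128, 64)
dummy_input = torch.randn(1, 128)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=13,
)
```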
Unsolved problems:
1. One test failure was caused by a bug in the cuDNN RNN kernels: they may allocate a buffer and only partially initialize it, and the garbage data near the tail of the buffer caused problems on some hardware. To attack this problem more broadly, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller (see the sketch after this list)?
2. Prepacking is used more widely than we know. For instance, the cuDNN RNN kernels also cache their weights: they mix several weight tensors together into a single buffer and never touch the original weight tensors again. This is the same idea as pre-packing, but they do not override the virtual function and never try to release those weight tensors, leading to wasted memory. It also seems that some other kernels have similar behavior. I wonder how much memory we could save if we cleaned those up too.
3. Turning off memory pattern planning does increase memory fragmentation, leading to out-of-memory errors in some training test cases. Perhaps we can revisit the idea of pushing the kernel-creation stage earlier, so that during initializer deserialization we only avoid tracing the initializers that will be prepacked.
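The proposal in item 1 is an allocator change in C++, but the idea can be sketched conceptually: during a fuzzing run, hand out buffers pre-filled with a sentinel instead of uninitialized memory, so any read of an unwritten tail surfaces immediately. A hypothetical numpy-based illustration (not the ONNX Runtime allocator code):

```python
import numpy as np

FUZZ_FILL = True  # hypothetical switch, enabled only during memory fuzzing tests

def alloc_buffer(shape, dtype=np.float32):
    """Allocate a buffer; under fuzzing, poison it rather than leave it uninitialized."""
    if FUZZ_FILL:
        # NaN (or any sentinel byte pattern) makes reads of never-written elements obvious.
        return np.full(shape, np.nan, dtype=dtype)
    return np.empty(shape, dtype=dtype)

buf = alloc_buffer((4, 8))
buf[:, :5] = 0.0                    # a kernel initializes only part of the buffer
assert np.isnan(buf[:, 5:]).all()   # the poisoned tail is easy to detect
```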
Various updates to the int8_t GEMMs:
1) Add an ARM64 UDOT kernel to take advantage of the dot-product instructions available in newer cores. Some models run 4x faster than with the stock implementation used before.
2) Refactor the x64 kernels so that the AVX2 (u8u8/u8s8/AVX-VNNI) and AVX512 (u8u8/u8s8/AVX512-VNNI) variants share common code, reducing binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.
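The per-column zero-point support lives inside the kernels, but the arithmetic it implements can be illustrated with a small reference computation (a sketch of the math, not the MLAS code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(2, 4), dtype=np.uint8)
B = rng.integers(0, 256, size=(4, 3), dtype=np.uint8)
za = 128                                                  # single zero point for A
zb = rng.integers(0, 256, size=(1, 3), dtype=np.uint8)    # one zero point per column of B

# Reference int32 GEMM with the zero points removed before accumulation;
# zb broadcasts across the rows of B, i.e. a per-column correction.
C = (A.astype(np.int32) - za) @ (B.astype(np.int32) - zb.astype(np.int32))
```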
* Update EyeLike CPU kernel.
* Update Mod CPU kernel.
* Update Multinomial CPU kernel.
* Slight improvement to Pad CPU kernel binary size.
* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.
* Added ReluGrad with GPU support.
Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
* Add GPU support for DNNL ConvGrad
Signed-off-by: George Nash <george.nash@intel.com>
* Add GPU support for DNNL MaxPoolGrad
Updates to MaxPool for training with GPU
Update oneDNN to version 1.8.1
Signed-off-by: George Nash <george.nash@intel.com>
* Fixed issues found during code review
- error in code comment
- using auto when the direct type would have been better
- removed ternary operators that were returning bool values
Signed-off-by: George Nash <george.nash@intel.com>
Co-authored-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
Cleaning up some naming in the op kernel type control infrastructure.
"Supported types" was somewhat semantically overloaded, so it has been renamed to "default types": the types that are supported by default.
Implement an alternate workaround for the LLVM x86 problem described in PR #5088. That change made the x86 assembly files build with the GNU assembler by passing -fno-integrated-as.
Implemented the following change to avoid the error that occurs when using both --use_external_data_format and --precision int8 with GPT2LMHeadModel, which results in:
line 161, in save_external_data; open(external_data_file_path, 'ab').close()
FileNotFoundError: [Errno 2] No such file or directory:
This may also be related to the identified bug #6047.
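The underlying failure is that the external-data file is opened at a path whose directory does not yet exist. A hedged sketch of the kind of guard that avoids it (paths and the helper name are illustrative, not the exact fix in the conversion script):

```python
import os
import onnx

def save_with_external_data(model: onnx.ModelProto, output_path: str) -> None:
    # Create the target directory before onnx tries to open the external .data file,
    # and resolve to an absolute path so the .data file lands next to the model.
    output_path = os.path.abspath(output_path)
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    onnx.save_model(
        model,
        output_path,
        save_as_external_data=True,
        all_tensors_to_one_file=True,
        location=os.path.basename(output_path) + ".data",
    )
```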
* add config allow_spinning
* add config allow_spinning
* set true as default
* split the configuration into separate inter-op and intra-op settings (see the sketch below)
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
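For reference, the resulting configuration entries can be set through SessionOptions; a minimal sketch using the intra-op and inter-op spinning keys (the model path is a placeholder):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Disable busy-wait spinning for both thread pools; "1" (the default) keeps spinning enabled.
so.add_session_config_entry("session.intra_op.allow_spinning", "0")
so.add_session_config_entry("session.inter_op.allow_spinning", "0")
sess = ort.InferenceSession("model.onnx", so)
```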
* Change msbuild condition for UAP
* update .netcore target as well
* create nuget packages with _native path
* validate path under _native directory for windowsai package
* pep8
* add diagnostic error message
* pep8
* use basename
* lib\uap10.0
* uap10
* build\\uap10.0
* Manually binplace winmds into appx when PackageReference is used.
* always binplace the winmd regardless of PackageReference, since C# should also work with packages.config
* resolve all paths to full paths to avoid some reference warnings
* move winmds out of lib folder to prevent automatic component registration
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
In the previous shared providers there weren't many OpKernel classes, and the existing Provider_OpKernel wrapper was fine. With the possibility of making CUDA a shared provider, needing to change this for every OpKernel adds a lot of complexity.
It was fairly straightforward to make OpKernel work with shared providers with minimal changes.
With this change, the ONNX_OPERATOR_* macros can also be used by the shared providers.