onnxruntime/cmake/external
George Nash d9eeb48393
One dnn v2.6 update (#11220)
* Disable training code in DNNL LayerNorm code

The capability code already does not claim LayerNorm and
SkipLayerNorm nodes that require more than one output. However,
building with training enabled was still causing issues, so the
training-specific code has been removed even when building with
training enabled.

Signed-off-by: George Nash <george.nash@intel.com>

* Fix for DNNL FusedMatMul op.
The bug was in the transpose code.

Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>

* Use agreed-upon memory format type when running Pooling Gradient in dnnl ep

The dnnl ep does not currently have a way to pass memory_format information
from the forward pooling primitive to the backward pooling primitive.

This change explicitly sets the memory_format to match that of ONNX Runtime
for both the forward and backward pooling code. This prevents a mismatched
memory format that could result in an `unimplemented` error from the dnnl ep.

Signed-off-by: George Nash <george.nash@intel.com>

* Update dnnl ep to use OneDNN v2.6

Do not run ReduceInfLogSum on the kDnnlExecutionProvider due to a
calculation bug when taking the Log of infinity values. The fix for this
issue will be part of the next OneDNN release.

Signed-off-by: George Nash <george.nash@intel.com>

* Update PrintMemory function in dnnl ep

This modification lets dnnl ep developers enable or disable memory
printing. It is considered a developer-only feature and is disabled by
default; it must be enabled and the code recompiled to use it.

Even when enabled, it will not actually print any memory until the
developer takes the extra step of specifying which memory should be
printed to the screen.

Signed-off-by: George Nash <george.nash@intel.com>

* Update binary ops to run on intel GPU when using dnnl ep

Binary ops (i.e. Add, Div, Mul, and Sub) were updated to no longer
call GetMemoryAndReshape; in the past this would move the memory from
CPU to the GPU. This extra call is no longer needed since the move is
taken care of by the GetMemoryInOrtFormat call. Removing GetMemoryAndReshape
prevents copying the memory to the GPU twice.

Signed-off-by: George Nash <george.nash@intel.com>

Co-authored-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
2022-04-15 12:51:11 -07:00
cub@c3cceac115 add dependency 'cub' as submodule (#1924) 2019-09-26 16:10:39 +08:00
cxxopts@3c73d91c0b Introduce training changes. 2020-03-11 14:39:03 -07:00
date@e7e1482087 Initial bootstrap commit. 2018-11-19 16:48:22 -08:00
dlpack@2775088798 Fix to_dlpack Failure on PyTorch-1.10 (#9151) 2021-09-24 09:48:07 +08:00
eigen@d10b27fe37 Downgrade Eigen (#8817) 2021-08-23 18:06:23 -07:00
emsdk@fc645b7626 Upgrade emsdk to 3.1.3 (#10577) 2022-02-28 23:52:41 -08:00
flatbuffers@6df40a2471 Move flatbuffers to 1.12 release (#5392) 2020-10-07 09:23:03 -07:00
googlebenchmark@7d0d9061d8 add google benchmark as direct dependency (#7762) 2021-05-19 20:12:17 -07:00
googletest@53495a2a7d Update googletest to latest commit to fix build issues with GCC11 (#7984) 2021-06-08 16:06:53 -07:00
json@db78ac1d77 Use GCC 10 in Linux CPU CI pipeline (#7985) 2021-06-08 11:53:29 -07:00
libprotobuf-mutator@7a2ed51a6b Onnxruntime fuzzing (#4341) 2020-07-06 16:34:34 -07:00
mimalloc@f412df7a2b Enable proper override using MIMalloc (#9944) 2021-12-07 17:56:58 -08:00
mp11@21cace4e57 Op kernel type reduction infrastructure. (#6466) 2021-01-28 07:27:19 -08:00
nsync@436617053d Update nsync 2020-02-20 11:25:34 -08:00
onnx@850a81b0b7 update with onnx 1.11 release (#10441) 2022-03-07 21:10:55 -08:00
onnx-tensorrt@4f54a1950e update onnx-tensorrt to bring in https://github.com/onnx/onnx-tensorrt/pull/812 (#10810) 2022-03-08 14:51:07 -08:00
onnxruntime-extensions@d4b2aff0c8 Enable linking in exception throwing support library when build onnxruntime wasm. (#8973) 2021-09-10 22:09:16 +08:00
protobuf@0dab03ba7b Update protobuf submodule (#10801) 2022-03-09 09:37:58 -08:00
pytorch_cpuinfo@5916273f79 Adding pytorch cpuinfo as dependency (#8178) 2021-07-12 14:21:12 -07:00
re2@4244cd1cb4 Update C++ Standard from 14 to 17 (#8041) 2021-06-25 14:08:01 -07:00
SafeInt Revert to using release SafeInt repo now that it supports a build with exceptions disabled. (#5233) 2020-09-22 06:29:28 +10:00
tensorboard@373eb09e4c Introduce training changes. 2020-03-11 14:39:03 -07:00
wil@e8c599bca6 Add DirectML Execution Provider (#2057) 2019-10-15 06:13:07 -07:00
abseil-cpp.cmake Enable building with a GDK (#11126) 2022-04-07 15:06:31 -07:00
dml.cmake Update to 1.8.0 2021-11-19 04:44:32 -08:00
dnnl.cmake One dnn v2.6 update (#11220) 2022-04-15 12:51:11 -07:00
eigen.cmake apply eigen patch only for ACL. 2019-11-05 13:53:53 -08:00
extensions.cmake Enable selecting custom ops in onnxruntime-extensions. (#8826) 2021-08-27 21:45:52 -07:00
FindNumPy.cmake Initial bootstrap commit. 2018-11-19 16:48:22 -08:00
jemalloc.cmake Initial bootstrap commit. 2018-11-19 16:48:22 -08:00
mimalloc.cmake Enable proper override using MIMalloc (#9944) 2021-12-07 17:56:58 -08:00
onnx_minimal.cmake Enable transpose optimizer in minimal extended build (#10349) 2022-01-31 09:41:04 -08:00
pybind11.cmake Add static code analyzer to Windows CPU/GPU CI builds and fix the warnings (#7489) 2021-04-29 11:54:57 -07:00
pyxir.cmake Check for Python_EXECUTABLE in pyxir.cmake to fix Vitis AI EP build (#8631) 2021-08-24 08:39:50 -07:00
tvm.cmake [TVM EP] code refactor (#10655) 2022-03-16 13:55:04 +01:00
zlib.cmake Add .git suffix to github URL. 2022-01-03 14:38:35 -08:00