onnxruntime/cmake/external
Julius Tischbein
Adding CUDNN Frontend and use for CUDA NN Convolution (#19470)
### Description
Added the cuDNN Frontend and used it for NHWC convolutions, optionally
fusing the activation.

#### Backward compatibility
- Existing models that contain FusedConv can still run.
- If ORT is built with cuDNN 8, the cuDNN frontend is not built into the
binary, and the old kernels (using the cuDNN backend APIs) are used.

#### Major Changes
- For cuDNN 9, the cuDNN frontend is used to fuse convolution and bias
when the provider option `fuse_conv_bias=1` is set (see the sketch after
this list).
- Removed the FusedConv fusion from the graph transformer for the CUDA
provider, so FusedConv nodes will no longer be added to the graph for
the CUDA EP.
- Updated the cmake files regarding the cuDNN settings. The search order
for the cuDNN installation at build time is:
  * the environment variable `CUDNN_PATH`
  * the `onnxruntime_CUDNN_HOME` cmake extra define. If a build starts
from build.py/build.sh, the user can pass it through the `--cudnn_home`
parameter, or through the environment variable `CUDNN_HOME` if
`--cudnn_home` is not used.
  * the cudnn python package installation directory, e.g.
python3.xx/site-packages/nvidia/cudnn
  * the CUDA installation path
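
A minimal usage sketch from the Python API (the option names
`fuse_conv_bias` and `prefer_nhwc` come from this PR; the exact accepted
values and defaults may vary by build):

```python
import onnxruntime as ort

# Sketch: pass the CUDA EP options named in this PR.
# "fuse_conv_bias" enables the cuDNN-frontend conv+bias fusion (cuDNN 9 builds);
# "prefer_nhwc" selects the NHWC convolution path.
providers = [
    ("CUDAExecutionProvider", {"fuse_conv_bias": "1", "prefer_nhwc": "1"}),
    "CPUExecutionProvider",  # fallback
]
session = ort.InferenceSession("resnet50.onnx", providers=providers)
```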

#### Potential Issues

- If ORT is built with cuDNN 8, the FusedConv fusion is no longer applied
automatically, so some models might see a performance regression. Users
who still want the FusedConv operator for performance reasons have a few
ways to work around this: use an older version of onnxruntime, or use an
older version of ORT to save the optimized onnx model and then run it
with the latest version of ORT (see the first sketch after this list).
We believe the majority of users will have moved to cuDNN 9 by the 1.20
release (cuDNN 9 will have been the default in ORT and PyTorch for 3
months by then), so the impact is small.
- The cuDNN graph uses TF32 by default, and the user cannot disable TF32
through the `use_tf32` CUDA provider option. If the user encounters an
accuracy issue (for example in testing), they have to set the environment
variable `NVIDIA_TF32_OVERRIDE=0` to disable TF32 (see the second sketch
after this list). The documentation of `use_tf32` needs to be updated
later.
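
A minimal sketch of the save-then-run workaround, assuming an older ORT
build in which the CUDA FusedConv fusion is still applied (file names
are placeholders):

```python
import onnxruntime as ort

# Run this step with an older ORT version that still performs the FusedConv fusion.
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
so.optimized_model_filepath = "resnet50_optimized.onnx"  # optimized model is written here

# Creating the session runs the graph optimizations (including FusedConv)
# and serializes the optimized model to the path above.
ort.InferenceSession("resnet50.onnx", so, providers=["CUDAExecutionProvider"])

# "resnet50_optimized.onnx" can then be run with the latest ORT.
```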
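
And a sketch of the TF32 workaround; `NVIDIA_TF32_OVERRIDE` is read when
CUDA/cuDNN initializes, so it must be set before onnxruntime loads the
CUDA EP:

```python
import os

# Disable TF32 process-wide; the use_tf32 provider option does not reach
# the cuDNN graphs introduced in this PR.
os.environ["NVIDIA_TF32_OVERRIDE"] = "0"

import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
```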

#### Follow ups
This is one of the PRs that target enabling NHWC convolution in the CUDA
EP by default when the device supports it. Other changes will follow to
make that possible:
(1) Enable `prefer_nhwc` by default for devices with sm >= 70.
(2) Make `fuse_conv_bias=1` the default after more testing.
(3) Add other NHWC operators (like Resize or UpSample).

### Motivation and Context

The new cuDNN Frontend library provides the functionality to fuse
operations and provides new heuristics for kernel selection. Here it
fuses the convolution with the pointwise bias operation. On the [NVIDIA
ResNet50](https://pytorch.org/hub/nvidia_deeplearningexamples_resnet50/)
we get a performance boost from 49.1144 ms to 42.4643 ms per inference
(about a 13.5% latency reduction) on a 2560x1440 input
(`onnxruntime_perf_test -e cuda -I -q -r 100 -d 1 -i 'prefer_nhwc|1'
resnet50.onnx`).

---------

Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Maximilian Mueller <maximilianm@nvidia.com>
2024-08-02 15:16:42 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| emsdk@d52c465201 | [js/web] optimize module export and deployment (#20165) | 2024-05-20 09:51:16 -07:00 |
| git.Win32.2.41.03.patch | Fix ability to use patch on Windows CI machines (#18356) | 2023-11-11 07:32:14 +10:00 |
| libprotobuf-mutator@7a2ed51a6b | | |
| onnx@595228d99e | Update to onnx 1.16.1 (#20702) | 2024-06-04 11:06:28 -07:00 |
| abseil-cpp.cmake | Update C++ dependencies (#21410) | 2024-07-23 10:00:36 -07:00 |
| abseil-cpp.natvis | Update abseil to a release tag and register neural_speed (#19255) | 2024-01-24 14:37:39 -08:00 |
| composable_kernel.cmake | [ROCm] Update ck to use ck_tile (#21030) | 2024-06-19 14:06:10 +08:00 |
| cuDNN.cmake | Adding CUDNN Frontend and use for CUDA NN Convolution (#19470) | 2024-08-02 15:16:42 -07:00 |
| cudnn_frontend.cmake | Adding CUDNN Frontend and use for CUDA NN Convolution (#19470) | 2024-08-02 15:16:42 -07:00 |
| cutlass.cmake | [CUDA] upgrade cutlass to 3.5.0 (#20940) | 2024-06-11 13:32:15 -07:00 |
| dml.cmake | Update DirectML from 1.14.1 to 1.15.0 (#21323) | 2024-07-22 16:59:03 -07:00 |
| dnnl.cmake | Update oneDNN to v3.0.1 in order to support gcc 13 (#19344) | 2024-02-01 15:39:03 -08:00 |
| eigen.cmake | Fix ability to use patch on Windows CI machines (#18356) | 2023-11-11 07:32:14 +10:00 |
| extensions.cmake | Update C/C++ dependencies: abseil, date, nsync, googletest, wil, mp11, cpuinfo and safeint (#15470) | 2023-09-08 13:35:04 -07:00 |
| find_snpe.cmake | Improve dependency management (#13523) | 2022-12-01 09:51:59 -08:00 |
| FindNumPy.cmake | | |
| helper_functions.cmake | Update RE2 to the latest (#20775) | 2024-05-23 14:30:15 -07:00 |
| ipp-crypto.cmake | | |
| mimalloc.cmake | Improve dependency management (#13523) | 2022-12-01 09:51:59 -08:00 |
| neural_speed.cmake | turn on neural_speed by default (#19627) | 2024-03-20 12:49:58 -07:00 |
| onnx_minimal.cmake | Fix some build issues on MacOS with Xcode 14.3. (#15878) | 2023-06-07 12:07:11 -07:00 |
| onnx_protobuf.natvis | Fix visualization issues with Attribute/Tensor protos (#17188) | 2023-08-16 13:56:51 -07:00 |
| onnxruntime_external_deps.cmake | Adding CUDNN Frontend and use for CUDA NN Convolution (#19470) | 2024-08-02 15:16:42 -07:00 |
| protobuf_function.cmake | Fix some build issues on MacOS with Xcode 14.3. (#15878) | 2023-06-07 12:07:11 -07:00 |
| pybind11.cmake | Improve dependency management (#13523) | 2022-12-01 09:51:59 -08:00 |
| pyxir.cmake | | |
| tvm.cmake | [TVM EP] Support zero copying TVM EP output tensor to ONNX Runtime output tensor (#12593) | 2023-02-08 10:02:20 -08:00 |
| wil.cmake | Rework WIL dependency retrieval/usage (#17130) | 2023-08-15 09:11:46 -07:00 |
| xnnpack.cmake | Enable RISC-V 64-bit Cross-Compiling Support for ONNX Runtime on Linux (#19238) | 2024-01-24 16:27:05 -08:00 |