"core/graph/function.h" appears twice:
- `include/onnxruntime/core/graph/function.h`
- `onnxruntime/core/graph/function.h` --> This one is redundant and not used anywhere
Support for device function pointers is not yet available for ROCm.
Instead, the device function pointers were converted to device functors.
Case statements, lambdas, and macros are used for dispatch; as a result,
all combinations of kernels are compiled with inlined functors. The
basis of this approach can be found in PyTorch.
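The functor-plus-dispatch pattern described above can be sketched as follows. This is a minimal illustration, not the actual ROCm EP code: the functor, kernel, and macro names are hypothetical, and the "kernel" is a plain loop standing in for a device kernel.

```cpp
#include <cassert>

// Each op becomes a functor with an inlined operator(), replacing a
// device function pointer (not yet supported on ROCm).
struct AddFunctor {
  float operator()(float a, float b) const { return a + b; }
};
struct MulFunctor {
  float operator()(float a, float b) const { return a * b; }
};

// The kernel template is instantiated once per functor, so every
// combination is compiled with the functor inlined (no indirect call).
template <typename Functor>
void ElementwiseKernel(const float* a, const float* b, float* out, int n) {
  Functor f;
  for (int i = 0; i < n; ++i) out[i] = f(a[i], b[i]);
}

enum class OpKind { kAdd, kMul };

// Dispatch via case statements; a macro keeps the per-op boilerplate
// small, similar in spirit to PyTorch's dispatch macros.
#define DISPATCH_OP(kind, ...)                                                \
  switch (kind) {                                                             \
    case OpKind::kAdd: { using OpFunctor = AddFunctor; __VA_ARGS__; break; }  \
    case OpKind::kMul: { using OpFunctor = MulFunctor; __VA_ARGS__; break; }  \
  }

void Launch(OpKind kind, const float* a, const float* b, float* out, int n) {
  DISPATCH_OP(kind, ElementwiseKernel<OpFunctor>(a, b, out, n));
}
```

The trade-off is binary size: all functor/kernel combinations are compiled up front, in exchange for avoiding device function pointers.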
Lastly, hipify and register Resize and Upsample for ROCm EP.
The dnnl_binary ops need the memory format of the inputs to match the format expected by ONNX Runtime. If the memory formats of the inputs do not match each other, the calculated results will be wrong.
Additionally, since the code manually pads the tensor dimensions for broadcasting, the inputs are expected to be in ONNX Runtime's format.
Since detecting and reordering the memory to ORT format matches what was previously done for the Reshape op, the code was moved from dnnl_reshape to dnnl_subgraph_primitive under the name GetMemoryInOrtFormat.
One small additional change was made to the capability logging code to also print the percentage of nodes run by the dnnl execution provider.
Signed-off-by: George Nash <george.nash@intel.com>
```
Component for aggressive decoding.
Find the bifurcation index between source tokens and predicted tokens, starting from the previous suffix match index.
Concat predicted tokens, starting from the bifurcation index, to the back
of the current tokens. This forms the output tokens.
Detect the suffix match index in source tokens, between source tokens and output tokens.
Detection is based on finding appearances of the last n-gram of the output tokens
in the source tokens.
A match is considered found if the source tokens contain a single matching n-gram.
Return the index of the start of the n-gram in source tokens.
No match is found if the source tokens contain zero or multiple matching n-grams; return -1.
```
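The suffix-match step described above can be sketched as a short function. The name and signature are illustrative, not the component's actual API: take the last n-gram of the output tokens, count its occurrences in the source tokens, and return the start index only when exactly one occurrence exists, otherwise -1.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Return the start index of the last n-gram of out_tokens within
// src_tokens if it occurs exactly once; otherwise return -1.
int FindSuffixMatch(const std::vector<int>& src_tokens,
                    const std::vector<int>& out_tokens, size_t n) {
  if (out_tokens.size() < n || src_tokens.size() < n) return -1;
  // The last n-gram of the output tokens.
  std::vector<int> ngram(out_tokens.end() - n, out_tokens.end());
  int match_index = -1;
  int match_count = 0;
  for (size_t i = 0; i + n <= src_tokens.size(); ++i) {
    if (std::equal(ngram.begin(), ngram.end(), src_tokens.begin() + i)) {
      match_index = static_cast<int>(i);
      ++match_count;
    }
  }
  // A single match is required; zero or multiple matches mean no match.
  return match_count == 1 ? match_index : -1;
}
```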
Add kernels for QLinearConv with a symmetrically quantized filter, i.e., the filter type is int8 and the zero point of the filter is 0. This PR includes kernels for avx2, avxvnni, avx512 and avx512vnni. Kernels for ARM64 will be added in a following PR.
The kernels use the input buffer directly for pointwise convolution, and an indirect buffer for depthwise and non-group convolution.
The advantages of these new kernels are:
- no need to compute the sum of each pixel of the output image, and the sum/offset of the filter can be combined with the bias.
- with the indirect buffer, im2col returns an array of buffer pointers instead of memcpy'ing the original data. This saves memcpy time and reduces the size of the intermediate buffer needed to hold the im2col transform. In the future, im2col will be computed ahead of time for inputs with a fixed size.
* Exception when duplicated autograd.Function name detected
* reorder a bit for a little bit better perf
* fix a bug in previous PR :(
* correct the error message a bit
* Support opset-13 for squeeze, unsqueeze, maxpool, pad, cast, clip
* merge master and update operators.md
* resolve comment. revise pool and cast kernel implementation.
* skip fusion when clip min and max are not initializers
* re-hipify all rocm EP sources
* fix all other files affected by re-hipify
* add cuda_provider_factory.h to amd_hipify.py
* do not use cudnn_conv_algo_search in ROCm EP, missing reduce min registration
* Fix ReduceConsts template specialization introduced in #9101.
Fixes the error when building for ROCm 4.3.1:
error: too many template headers for onnxruntime::rocm::ReduceConsts<__half>::One (should be 0)
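A minimal reproduction of that diagnostic, using a stand-in type since `__half` needs the ROCm headers: once a class template is fully specialized, its static member is defined like an ordinary class member, with no `template <>` header; adding one is what some compilers report as "too many template headers ... (should be 0)".

```cpp
// Primary template with a static member.
template <typename T>
struct ReduceConstsSketch {
  static const T One;
};

// Member definition for the primary template keeps its template header.
template <typename T>
const T ReduceConstsSketch<T>::One = T(1);

// Full specialization of the class (double stands in for __half here).
template <>
struct ReduceConstsSketch<double> {
  static const double One;
};

// Correct: no "template <>" header on the specialization's member
// definition. Prefixing it with "template <>" triggers the error above.
const double ReduceConstsSketch<double>::One = 1.0;
```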
* fix flake8 error in amd_hipify.py
* speed up hipify with concurrent.futures
* flake8 fix in amd_hipify.py