Update the clang-tidy config in preparation for a CI workflow that runs
clang-tidy.
Added a clang-tidy check in CI.
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
### Description
Kernels like Attention, BatchNormalization15, etc., can be implemented
using multiple DML APIs. This PR paves the path for graph-based kernel
implementations.
As part of this PR, every kernel in the DML EP now wraps its
DML_OPERATOR_DESC into a graph and sends it to FusedGraphKernel.
FusedGraphKernel stitches this smaller graph into its main DML_GRAPH.
All ONNX conformance tests and WinML model tests passed.
Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
### Description
<!-- Describe your changes. -->
A fix for a parity issue in the Hugging Face BART model with beam search:
https://github.com/microsoft/onnxruntime/pull/12779
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
### Description
<!-- Describe your changes. -->
Add handling for variadic inputs/outputs in a function.
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
#13121
**Description**:
Use the ONNX headers to find the latest opset for each operator. This
allows the script to detect optimizers with
`graph_utils::IsSupportedOptypeVersionAndDomain` calls that need
updating when it is run during an update of the ONNX commit id. Without
this change, issues are not detected until a new kernel is registered.
**Motivation and Context**
Detect optimizers that need updates as part of the ONNX update process.
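As a rough illustration (the entries and names below are hypothetical, not the actual script's data), finding the latest opset for each operator amounts to taking the maximum `since_version` over that operator's schema history:

```python
# Hypothetical (op_type, domain, since_version) entries; the real script
# derives these from the ONNX operator schema headers.
schemas = [
    ("Gather", "", 1), ("Gather", "", 11), ("Gather", "", 13),
    ("Slice", "", 1), ("Slice", "", 10), ("Slice", "", 13),
]

latest_opset = {}
for op_type, domain, since_version in schemas:
    key = (op_type, domain)
    latest_opset[key] = max(latest_opset.get(key, 0), since_version)

print(latest_opset[("Gather", "")])  # -> 13
```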
For the code below, which appears in some transformer models:
```python
fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads, 3, self.head_dim)
return fused_qkv[..., 0, :], fused_qkv[..., 1, :], fused_qkv[..., 2, :]
```
The exported graph contains three Gather nodes, and ORT's current
GatherGrad CUDA implementation is slow. This pattern can be fused into
one Split, so that fewer kernels are launched for the compute; the perf
of Split/Concat (for the gradient) is also better than
Gather/GatherGrad.
In a real example, one GatherGrad took 15 ms and there are three per
layer in the graph; after the fusion, one Concat takes only 35 us. The
total time of a step improved from 1.5 s to 0.4 s.
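As a sanity check of the equivalence this fusion relies on (the shapes here are arbitrary, chosen only for illustration), the three gathers along the packed axis produce the same tensors as one split followed by a squeeze:

```python
import numpy as np

# Arbitrary illustrative shapes
batch_size, seq_length, num_heads, head_dim = 2, 4, 3, 5
fused_qkv = np.random.rand(batch_size, seq_length, num_heads, 3, head_dim)

# Pattern exported as three Gather nodes:
q, k, v = fused_qkv[..., 0, :], fused_qkv[..., 1, :], fused_qkv[..., 2, :]

# Equivalent single Split along axis 3, then squeeze the split axis:
q2, k2, v2 = (np.squeeze(p, axis=3) for p in np.split(fused_qkv, 3, axis=3))

assert np.array_equal(q, q2) and np.array_equal(k, k2) and np.array_equal(v, v2)
```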
### Description
<!-- Describe your changes. -->
Fix the MIGraphX CI pipeline failure.
The MIGraphX pipeline is disabled for now; it will be re-enabled when
this PR merges.
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
### Description
We fix the iGPU unit and Python tests with this PR.
We also add the `packaging` pip package to the manylinux build
Dockerfile.
### Motivation and Context
This change is required to make sure the iGPU unit tests and Python
tests with OpenVINO are fixed.
Co-authored-by: shamaksx <shamax.kshirsagar@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: pratiksha <pratikshax.bapusaheb.vanse@intel.com>
Co-authored-by: pratiksha <mohsinx.mohammad@intel.com>
Co-authored-by: Sahar Fatima <sfatima.3001@gmail.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: nmaajidk <n.maajid.khan@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Previously OnnxSequence would flatten out a list of tensors into a
single output array assuming they were all scalar values. This doesn't
accurately represent the semantics of an ONNX sequence, but was what the
semantics appeared to be years ago when I first wrote that class. This
PR changes it so that the `getValue` method on `OnnxSequence` unwraps
the sequence and returns `List<? extends OnnxValue>` allowing the user
to process the individual ONNX values separately. It's done this way
rather than returning a multidimensional array for a tensor and a Java
map for a map because multidimensional arrays are very inefficient in
Java, and best practice when operating with an `OnnxTensor` in Java is
to use a `java.nio.ByteBuffer`. Allowing users to access each
`OnnxTensor` individually lets them control how the data is materialised
on the Java heap.
**Description**:
This allows us to quickly launch a microbench session, for example:
`python skip_layer_norm_test.py 8 128 128 float32`
Change ROCm to use tunable GEMM (not enabled in this PR). This will
drastically improve GEMM performance for some shape and dtype
configurations, benefiting overall BERT inference performance and,
hopefully, training once enabled.
### Description
<!-- Describe your changes. -->
As title
- Split the long OpBuilder and OpSupportChecker files into individual
operator files.
- Add OpBuilder/SupportChecker registry factories.
- Combine the functionality of op_builder and op_support_checker into
one op_builder.
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
The NNAPI OpBuilder was split into OpBuilder (for EP::Compile) and
OpSupportChecker (for EP::GetCapability).
At the time that was a reasonable choice, but OpBuilder/OpSupportChecker
share some logic and have to use additional helpers.
Clean this up now by merging the NNAPI OpBuilder/OpSupportChecker into a
single OpBuilder (similar to what the CoreML EP has).
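The combined builder with a registry factory might be sketched roughly like this (illustrative Python with hypothetical names; the actual NNAPI EP code is C++):

```python
# Hypothetical sketch of a combined op-builder registry; names are
# illustrative only, not the real NNAPI EP classes.
_op_builders = {}

def register_op_builder(op_type):
    # Factory decorator: maps an ONNX op type to its builder class
    def wrap(cls):
        _op_builders[op_type] = cls
        return cls
    return wrap

@register_op_builder("Conv")
class ConvOpBuilder:
    def is_op_supported(self, node):
        # formerly OpSupportChecker logic (used by EP::GetCapability)
        return True

    def add_to_model(self, node):
        # formerly OpBuilder logic (used by EP::Compile)
        pass

def get_op_builder(op_type):
    cls = _op_builders.get(op_type)
    return cls() if cls else None
```

Keeping both methods on one class lets the support check and the build step share helpers without duplicating logic.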
### Description
<!-- Describe your changes. -->
Update the React Native documentation to reflect the change to use the
full ORT package. Fix broken links.
### Motivation and Context
ORT v1.13 uses the full ORT package. Instructions for performing a
custom build did not cover this.
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
**Description**:
Add the SkipLayerNorm vectorized regular case:
1. When the hidden size is <= 1024, the SkipLayerNormTunable op can use
both the small case and the regular case.
2. When the hidden size is > 1024, the SkipLayerNormTunable op can only
use the regular case.
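The size-based selection above can be sketched as follows (the function and variant names are assumed for illustration; the real dispatch lives in the CUDA/ROCm kernels):

```python
# Hypothetical dispatch mirroring the rule above; names are illustrative.
def candidate_kernels(hidden_size: int) -> list:
    if hidden_size <= 1024:
        # Both the small and regular vectorized cases are candidates,
        # and the tunable op can pick the faster one.
        return ["small", "regular"]
    # Beyond 1024 only the regular case applies.
    return ["regular"]
```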
**Motivation and Context**
This PR is to fix https://github.com/microsoft/onnxruntime/issues/12930
and https://github.com/microsoft/onnxruntime/issues/12579.
In detail:
- For the CPU EP: the current implementation of
SimplifiedLayerNormalization doesn't support the input and scale having
different data types, so if the sub-graph contains a Cast op, the
sub-graph will not be fused; this guarantees that the input and output
data types are the same.
- For the CUDA EP: add (fp16, float) support to the (T, V) type
constraints so that all combinations of fp16 and float are supported in
the implementation.
With the fix, the original model can run with
SimplifiedLayerNormalization, which also helps improve the perf.
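For reference, SimplifiedLayerNormalization is an RMS-style normalization (no mean subtraction, no bias); a minimal numpy sketch under that assumption, including the mixed fp16-input/float32-scale case the CUDA fix enables:

```python
import numpy as np

def simplified_layer_norm(x, scale, eps=1e-6):
    # RMS-style normalization: no mean subtraction and no bias term.
    # Compute in float32 regardless of the input dtype, as a CUDA kernel
    # with mixed (fp16, float) type constraints might.
    x32 = x.astype(np.float32)
    variance = np.mean(np.square(x32), axis=-1, keepdims=True)
    return (x32 / np.sqrt(variance + eps)) * scale

# Mixed-precision case from the fix: fp16 input with a float32 scale
x = np.random.rand(2, 8).astype(np.float16)
scale = np.ones(8, dtype=np.float32)
y = simplified_layer_norm(x, scale)
```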
Fix an issue where all node inputs are added as sub-graph inputs even
when an input does not exist.
Solution:
Skip the placeholder inputs while adding node inputs as sub-graph
inputs. E.g. in the ONNX node test test_resize_upsample_scales_linear,
the 2nd input, roi, is empty.
Fixes microsoft/onnxruntime#12969
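The skip described above can be sketched like this (illustrative Python; the actual change is in ORT's C++ graph code, and the function name is hypothetical):

```python
# Hypothetical sketch: in ONNX, an omitted optional input is encoded as
# an empty-string name, so it must not become a sub-graph input.
def collect_subgraph_inputs(nodes):
    subgraph_inputs = []
    for node in nodes:
        for name in node["inputs"]:
            if name:  # skip "" placeholders, e.g. Resize's omitted 'roi'
                subgraph_inputs.append(name)
    return subgraph_inputs

nodes = [{"inputs": ["X", "", "scales"]}]
print(collect_subgraph_inputs(nodes))  # -> ['X', 'scales']
```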
### Motivation and Context
The build is broken: cudnn.lib can't be found with the official NVIDIA
install of cuDNN.
An alternative method is to use `IF(EXISTS
${onnxruntime_CUDNN_HOME}/lib/x64/cudnn.lib)` to test for the legacy
location and only add the legacy dir to the path, otherwise add the
current official `lib/` dir.