### Description
Adds the extra option `QDQKeepRemovableActivations` to optionally
prevent automatic removal of Clip/Relu ops in QDQ models. The current
default behavior, which is to remove Clip/Relu, remains the same if the
new option is not enabled.
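A minimal sketch of how the new option could be passed through the quantization tool's `extra_options` (assumption: it is set like other QDQ extra options; the model paths, input name, and dummy calibration reader below are placeholders, not from this PR):
```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static

class DummyReader(CalibrationDataReader):
    """Placeholder calibration reader; replace with real calibration data."""
    def __init__(self):
        self._batches = iter([{"input": np.zeros((1, 3, 224, 224), dtype=np.float32)}])
    def get_next(self):
        return next(self._batches, None)

# Keep Clip/Relu ops in the QDQ model instead of removing them during quantization.
quantize_static(
    "model_fp32.onnx",
    "model_qdq.onnx",
    DummyReader(),
    quant_format=QuantFormat.QDQ,
    extra_options={"QDQKeepRemovableActivations": True},
)
```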
### Motivation and Context
Explicitly representing these Relu/Clip operators in the QDQ model is
necessary if optimizations or EP transformations will later remove
QuantizeLinear/DequantizeLinear operators from the model.
### Motivation and Context
The Intel NPU does not support 16-bit int quantized operators.
Consequently, the execution provider removes the
QuantizeLinear/DeQuantizeLinear (Q/DQ) operators from node units and
executes the operation as FP16 in the backend. However, if a Clip
operator was fused into a Q operator in the node unit, the removal of
Q/DQ operators results in inaccuracies because the effect of the
original Clip operators is lost.
Consider the following example:
- FP32 model: -> Op_FP32 -> Clip ->
- QDQ model: -> (DQ-> Op_FP32 -> Q) -> (DQ' -> Clip -> Q') ->
- After ClipQuantFusion: -> (DQ-> Op_FP32 -> Q) -> (DQ' -> Q') ->
- Intel Execution Provider strips Q/DQ: -> Op_FP16 ->
To solve this issue, we have enabled ClipQuantFusion exclusively on the
CPU execution provider.
### Description
- This PR combines all CUDA 12 stages into the Zip-nuget-... pipeline.
- It also enables CUDA 12 support.
### Motivation and Context
### Description
The InsertGatherBeforeSceLoss optimization is enabled when the label padding density is less than 90%, so we need to check the padding density to decide whether to enable the optimization.
Before this PR, we checked the graph inputs and correlated one of them with the SCE node by iterating the graph from the SCE node back to a graph input. This is hard to generalize because the pattern between the graph input and the SCE node can be complicated.
This PR instead checks the padding density on the direct input of the SCE module, during the first graph execution when exporting the ONNX graph.
If the density is less than 90%, a flag PythonOp is inserted after the SCE node as:
```
SoftmaxCrossEntropy
|
PythonOp (func_name: FlagAndPrintDensity) (insert if density < 90%)
|
Following graph
```
When InsertGatherBeforeSceLoss is invoked, it checks whether the flag PythonOp (func_name: FlagAndPrintDensity) is present after the SCE node; if so, it removes it and applies the padding elimination optimization.
If the environment variable ORTMODULE_PRINT_INPUT_DENSITY is 1, the PythonOp (func_name: FlagAndPrintDensity) prints the input density at each step. In this case the PythonOp is not removed.
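For reference, a small sketch of what the padding density means, assuming "density" is the fraction of non-padding label positions and padding uses the usual ignore index of -100 (illustrative helper, not ORT code):
```python
import torch

def label_padding_density(labels: torch.Tensor, ignore_index: int = -100) -> float:
    """Fraction of label positions that are real (non-padding) tokens."""
    return (labels != ignore_index).sum().item() / labels.numel()

# 3 real tokens out of 10 positions -> density 0.3, so the optimization would apply.
labels = torch.tensor([[1, 2, 3, -100, -100, -100, -100, -100, -100, -100]])
assert label_padding_density(labels) < 0.9
```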
### Description
During VitisAI shared library load, set unload to false to prevent a crash when the Linux library fails to load.
### Motivation and Context
In a Linux environment, when the library is not loaded successfully, the process ends up crashing without giving any useful message.
The fix prevents the crash and gives a useful message when the shared library is not loaded correctly.
### Description
Make the operator more flexible:
(1) Decouple the max sequence lengths of the rotary cache, KV cache, and block mask; they are allowed to have different values.
(2) Replace the dense block_mask with a CSR format (block_row_indices and block_col_indices) to improve performance (see the sketch after this list).
(3) Mark past_key and past_value as required inputs since we need them
to compute the shape of present_key and present_value.
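For illustration, a rough sketch of the CSR idea behind the new inputs; the exact layout expected by the SparseAttention operator (including how block_row_indices/block_col_indices are padded per layout) should be taken from the operator spec, this only shows the conversion from a dense block mask:
```python
import numpy as np

def dense_block_mask_to_csr(block_mask: np.ndarray):
    """Convert a dense 0/1 block mask into CSR row offsets and column indices."""
    num_rows = block_mask.shape[0]
    row_indices = np.zeros(num_rows + 1, dtype=np.int32)  # CSR row offsets
    col_indices = []
    for r in range(num_rows):
        cols = np.nonzero(block_mask[r])[0]
        col_indices.extend(cols.tolist())
        row_indices[r + 1] = len(col_indices)
    return row_indices, np.asarray(col_indices, dtype=np.int32)

# Lower-triangular (causal) 4x4 block mask as a tiny example.
mask = np.tril(np.ones((4, 4), dtype=np.int32))
rows, cols = dense_block_mask_to_csr(mask)
print(rows)   # [ 0  1  3  6 10]
print(cols)   # [0 0 1 0 1 2 0 1 2 3]
```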
### Motivation and Context
(1) LongRoPE has short and long rotary caches, which have different lengths.
(2) Most users do not have enough GPU memory to run the maximum sequence length of 128K. This change allows users to use a smaller KV cache length for testing without running out of memory.
### Improve perf for mem efficient grad mgmt
When the memory-efficient gradient management feature is enabled, the weight-retrieval PythonOp for every layer is launched at the beginning of the forward pass, which leaves the GPU stream idle for a few milliseconds. The reason is that the ReversedDFS ordering cannot always handle such input branching well, so we introduce a distance-to-input-leaf concept in the ReversedDFS. This moves not only the problematic PythonOps but also the Cast ops following the weight retrieval to the places where they are needed.
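As a rough illustration of the distance-to-input-leaf idea (an assumed sketch, not the actual ORT graph code): compute, for each node, how far it is from the graph inputs, and use that distance during the reversed DFS so input-only branches such as the weight-retrieval PythonOps and the Casts that follow them can be scheduled next to their consumers rather than all at the start of the forward pass.
```python
def distance_to_input_leaf(topo_order, producers):
    """topo_order: node names in topological order; producers[n]: nodes feeding n."""
    dist = {}
    for node in topo_order:
        preds = producers.get(node, [])
        dist[node] = 1 + max(dist[p] for p in preds) if preds else 0
    return dist

# Hypothetical mini-graph: a weight-retrieval PythonOp feeding a Cast and a MatMul.
producers = {"cast_w": ["retrieve_w"], "matmul": ["cast_w", "hidden"]}
print(distance_to_input_leaf(["retrieve_w", "cast_w", "hidden", "matmul"], producers))
# {'retrieve_w': 0, 'cast_w': 1, 'hidden': 0, 'matmul': 2}
```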
Main branch: 102.19s - 26.35s = 75.84s for 260 steps (4627 samples), 61.04 samples/second.
This PR: 100.28s - 25.10s = 75.18s for 260 steps, 61.54 samples/second (+0.8% gain).
### Motivation and Context
### Description
The orttrainingtestdatascus storage account only stores the MNIST test data, whose size is only 64 MB, in Azure Files.
To meet security requirements and reduce maintenance cost, move the test data to lotusscus and store it in Azure Blob storage.
### Description
Add support for using ONNX Runtime with Node.js.
### Motivation and Context
ONNX Runtime supports the QNN HTP backend, but does not support it for Node.js. This adds baseline support for ONNX Runtime to be used with Node.js.
Note it does not update the Node packages that are distributed officially. This simply patches onnxruntime.dll to allow 'qnn' to be used as an execution provider.
Testing was done using the existing onnxruntime-node package. The newly built `onnxruntime.dll` and `onnxruntime_binding.node` were swapped into `node_modules\onnxruntime-node\bin\napi-v3\win32\arm64`, and then the various QNN DLLs and .so files were placed next to onnxruntime.dll. Testing was performed on a variety of models and applications, but the easiest test is to modify the [node quickstart example](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/quick-start_onnxruntime-node).
This pull request primarily involves changes to the build scripts in the
`tools/ci_build/github/azure-pipelines` directory. The changes add build
date and time information to the build process. This is achieved by
introducing two new parameters, `BuildDate` and `BuildTime`, and
incorporating them into the `msbuildArguments` in multiple locations.
Addition of new parameters:
*
[`tools/ci_build/github/azure-pipelines/templates/c-api-cpu.yml`](diffhunk://#diff-00815920cc190d10fdebceac0c3a4b8a59e408684ae38177dfe7f96cae276c59R309-R310):
Added `BuildDate` and `BuildTime` parameters using the pipeline's start
time.
Incorporation of new parameters in `msbuildArguments`:
*
[`tools/ci_build/github/azure-pipelines/c-api-noopenmp-packaging-pipelines.yml`](diffhunk://#diff-efb530efd945fdd9d3e1b92e53d25cc8db7df2e28071c364b07a7193092de01bL947-R948):
Added `CurrentDate` and `CurrentTime` parameters to `msbuildArguments`
in multiple locations.
[[1]](diffhunk://#diff-efb530efd945fdd9d3e1b92e53d25cc8db7df2e28071c364b07a7193092de01bL947-R948)
[[2]](diffhunk://#diff-efb530efd945fdd9d3e1b92e53d25cc8db7df2e28071c364b07a7193092de01bL1092-R1093)
[[3]](diffhunk://#diff-efb530efd945fdd9d3e1b92e53d25cc8db7df2e28071c364b07a7193092de01bL1114-R1115)
[[4]](diffhunk://#diff-efb530efd945fdd9d3e1b92e53d25cc8db7df2e28071c364b07a7193092de01bL1137-R1138)
*
[`tools/ci_build/github/azure-pipelines/templates/c-api-cpu.yml`](diffhunk://#diff-00815920cc190d10fdebceac0c3a4b8a59e408684ae38177dfe7f96cae276c59L446-R448):
Incorporated the `CurrentDate` and `CurrentTime` parameters into
`msbuildArguments`.
### Description
When the sequence length is 128K, block_mask has 2048 rows, which is not supported by the previous kernel.
(1) Add a new kernel to handle more than 1024 rows; each thread handles two rows.
(2) Add a test for sequence length 128K.
- Update method for uploading to Azure storage to use managed identity.
- Allow helper script tasks to be split across different calls.
- Rewrite helper script in Python.
Motivation:
Recently the Azure storage account configuration was changed and now the old way of uploading to it no longer works.
### Description
This PR fixes the dimension checks for the cos/sin caches used in the
rotary embeddings in the `SparseAttention` operator.
### Motivation and Context
This PR ports over the same changes from [this
PR](https://github.com/microsoft/onnxruntime/pull/20547) for
`GroupQueryAttention`.
### Description
Create numpy arrays based on the native buffers of returned OrtValues.
Hold on to the OrtValue until the numpy array is garbage collected.
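A minimal sketch of the zero-copy pattern this describes (illustrative only, not the actual binding code): the numpy array is a view over the value's buffer, and the buffer owner stays alive through the array's `base` reference until the array is garbage collected.
```python
import numpy as np

owner = bytearray(4 * 6)                      # stand-in for an OrtValue's native buffer
arr = np.frombuffer(owner, dtype=np.float32).reshape(2, 3)  # view, no copy

assert arr.base is not None                   # the view keeps the buffer alive
del owner                                     # the memory remains reachable via the array
print(arr.shape)                              # (2, 3), backed by the same bytes
```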
### Motivation and Context
This saves CPU time on tensor copies and addresses customer concerns.
### Description
Optimize the GQA implementation on CPU. The main optimizations are:
1. Compute attention over the real total sequence length instead of the maximum sequence length when past/present share the same buffer.
2. Remove the mask.
3. Remove the transpose after the attention x value product.
This improves the Phi-3 model
https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/phi3-qa.py
with max sequence length 2K/4K from 10 tps to 20 tps.
### Motivation and Context
### Description
Fix a few issues in GQA:
(1) Memory-efficient attention does not support bfloat16; it needs to be disabled when bfloat16 is used.
(2) When the prompt length is 1, it is not classified as a prompt.
(3) Fix benchmark_gqa.py.
(4) Add a comment about seqlen_k to avoid confusion.
### Motivation and Context
https://github.com/microsoft/onnxruntime/pull/20279
### Description
### Motivation and Context
Patching in fast math disabled in the MIGraphX compile stage of the MIGraphX EP.
### Description
Allow the MIGraphX API, when compiling the program given to the EP, to turn off fast math by default.
### Motivation and Context
Fixes an accuracy issue we're seeing with GELU parity tests. Without fast math disabled, GELU uses a faster but less numerically stable version that trades accuracy for speed.
Co-authored-by: Ted Themistokleous <tedthemistokleous@amd.com>
### Description
Currently, figuring out whether the protobuf dependency is building protoc is a little obtuse and inconsistent:
* in some places we directly set protobuf_BUILD_PROTOC_BINARIES to OFF
to indicate the protobuf dependency is not building protoc
* e.g. macOS/iOS/visionOS builds
* for a user-provided protoc path we don't set protobuf_BUILD_PROTOC_BINARIES, and inside protobuf_function.cmake that setting determines whether `protobuf::protoc` is added as a dependency or not
*
0dda8b0c44/cmake/external/protobuf_function.cmake (L40-L45)
To be more consistent/explicit, set protobuf_BUILD_PROTOC_BINARIES to OFF when ONNX_CUSTOM_PROTOC_EXECUTABLE is set and valid.
Also remove the outdated script that built an external protoc binary for use in later builds. The build setup fetches a pre-built protoc, so there's no need for this additional build.
### Motivation and Context
Make it easier to figure out if protoc is coming from the protobuf
dependency.
### Description
There was a bug in GQA on CPU where, in the token case with batch_size > 1 and past_present_share_buffer off, the output would occasionally contain NaNs. This PR fixes that. It also updates the documentation and fixes position id generation for rotary embedding in CUDA in the prompt case.
### Motivation and Context
This PR fixes the GQA CPU bug, updates the documentation, and makes seqlens_k irrelevant in the prompt case, which is useful to prevent user error.
Made some changes to the arm64x.cmake script to:
- handle an edge case
- enable projects that include onnxruntime as a submodule and build it to be able to build as X without causing the onnxruntime build_as_x to fail.
### Description
Operators should not modify input tensors because they are managed by the framework and may be reused by other nodes.
### Motivation and Context
### Description
This PR supports profiling and tuning MoE GEMM kernels on the very first run and stores the best configuration to reuse in following runs. The GEMM id (the key into the config map, an int64_t) is determined by total_rows, gemm_n, and gemm_k for each type.
The first 32 bits are total_rows, the next 16 bits are gemm_n, and the last 16 bits are gemm_k:
```
int64_t key = total_rows;
key = key << 16 | gemm_n;
key = key << 16 | gemm_k;
```
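A quick pack/unpack sketch of this key layout (Python, for illustration only):
```python
def pack_gemm_key(total_rows: int, gemm_n: int, gemm_k: int) -> int:
    key = total_rows
    key = (key << 16) | gemm_n
    key = (key << 16) | gemm_k
    return key

def unpack_gemm_key(key: int):
    gemm_k = key & 0xFFFF
    gemm_n = (key >> 16) & 0xFFFF
    total_rows = key >> 32
    return total_rows, gemm_n, gemm_k

assert unpack_gemm_key(pack_gemm_key(1024, 14336, 4096)) == (1024, 14336, 4096)
```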
Mixtral-fp16 on 2 A100 with tp=2. batch size = 1, seq_len = 1k
| | Prompt | Token |
| :--- | :---: | ---: |
| before | 138ms | 16.4ms |
| after | 100ms | 13.9ms |
### Motivation and Context
### Fix missing node during mem efficient topo sort
Some nodes are not consumed by the backward path and also do not generate graph outputs. We missed those nodes, so this PR fixes that and adds related tests.
A side note: we should remove, in a graph transformer, those nodes that are not used for computing any graph outputs. (TODO)
### Motivation and Context
### Description
I misunderstood how UpdateCUDAProviderOptions and UpdateTensorRTProviderOptions work in the C API: I had assumed that they updated the options struct, but they actually re-initialize the struct to the defaults and then apply only the values in the update. I've rewritten the Java bindings for those classes so that they aggregate all the updates and apply them in one go. I also updated the C API documentation to note that these functions have this behaviour. I've not checked whether any of the other providers with an options struct behave this way; we only expose CUDA and TensorRT's options in Java.
There's a small unrelated update to add a private constructor to the
Fp16Conversions classes to remove a documentation warning (they
shouldn't be instantiated anyway as they are utility classes containing
static methods).
### Motivation and Context
Fixes #20544.
### Description
Follow-up of https://github.com/microsoft/onnxruntime/pull/20216 to add sparse attention kernels compiled by Triton for H100 (sm90).
- [x] Refine sparse attention v1 kernel compilation (remove some combinations)
- [x] Compile v1 kernels
- [x] Compile kernels for H100
- [x] Run performance tests
### Performance
Test setting `batch_size=4, num_heads=32, max_seq_len=8192,
head_size=128, sparse_block_size=64, local_blocks=16, vert_stride=8,
num_layout=8`
We compare sparse attention with the corresponding GQA with a local attention window size of 1024, and with GQA with dense causal attention. Note that ORT-GQA-Dense has more computation than ORT-SparseAtt, while ORT-GQA-Local has less computation (no vertical strides) than ORT-SparseAtt; they are added for reference. It is not a fair comparison, but it shows the benefit of sparsity versus dense attention.
Example results in Azure Standard_ND96isr_H100_v5 VM with NVIDIA
H100-80GB-HBM3 GPU (sm=90):
```
prompt-sm90-batch4-head32-d128-local16-vert8-torch.float16:
sequence_length TORCH-GQA ORT-GQA-Dense ORT-GQA-Local ORT-SparseAtt
0 16.0 0.079877 0.006362 0.006403 0.042758
1 32.0 0.086920 0.016404 0.016686 0.044183
2 64.0 0.090727 0.020429 0.020409 0.045343
3 128.0 0.128148 0.032009 0.031984 0.051516
4 256.0 0.323933 0.074110 0.073920 0.068308
5 512.0 1.021856 0.162167 0.161951 0.109226
6 1024.0 3.596002 0.452629 0.452780 0.231653
7 2048.0 13.865088 1.499534 1.195749 0.515488
8 4096.0 0.000000 5.454785 2.669682 1.163233
9 8192.0 0.000000 22.068159 6.018604 2.772873
token-sm90-batch4-head32-d128-local16-vert8-torch.float16:
past_sequence_length TORCH-GQA ORT-GQA-Dense ORT-GQA-Local ORT-SparseAtt
0 16.0 0.104460 0.012652 0.012661 0.069549
1 32.0 0.113866 0.012776 0.012765 0.069024
2 64.0 0.124600 0.016791 0.012672 0.069397
3 128.0 0.108658 0.017900 0.018294 0.074844
4 256.0 0.115463 0.029409 0.029608 0.078911
5 512.0 0.149824 0.033968 0.033701 0.092998
6 1024.0 0.234050 0.042930 0.042951 0.116920
7 2048.0 0.390695 0.061462 0.043008 0.121555
8 4096.0 0.000000 0.097505 0.042948 0.134757
9 8191.0 0.000000 0.165861 0.043542 0.158796
```
The following might help performance on short sequence lengths, but needs an operator spec update: fall back to flash attention when total_sequence_length < local_blocks * block_size (with the settings above, 16 * 64 = 1024).
### Motivation and Context
### Description
As a follow-up of #20506
### Motivation and Context
### Description
When past/present share the same buffer, the present sequence length differs from the total sequence length. The size of the cos/sin cache should be checked against the sequence length.
### Motivation and Context
### Description
- Adds support for float32/float16 HardSigmoid on HTP backend.
Decomposes `HardSigmoid(X)` into `max(0, min(1, alpha * X + beta))`.
- Fuses the sequence `X * HardSigmoid<alpha=1/6, beta=0.5>(X)` into a single `HardSwish(x)`. Only applies to non-quantized HardSigmoid/Mul (see the sketch below).
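A small numpy sketch of the decomposition and fusion semantics (reference math only, not the QNN EP implementation):
```python
import numpy as np

def hard_sigmoid(x, alpha=1.0 / 6.0, beta=0.5):
    # HardSigmoid(X) = max(0, min(1, alpha * X + beta))
    return np.maximum(0.0, np.minimum(1.0, alpha * x + beta))

def hard_swish(x):
    # HardSwish(X) = X * HardSigmoid<alpha=1/6, beta=0.5>(X)
    return x * hard_sigmoid(x)

x = np.linspace(-4.0, 4.0, 9, dtype=np.float32)
assert np.allclose(hard_swish(x), x * np.clip(x / 6.0 + 0.5, 0.0, 1.0))
```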
### Motivation and Context
QNN does not natively support HardSigmoid. These changes expand model
support on QNN EP.
### Description
This PR registers DFT-20 to the DML EP.
### Motivation and Context
### Description
- Updates QNN pipelines to use QNN SDK 2.21
- Downloads QNN SDK from Azure storage to avoid having to rebuild images
when a new version is released.
### Motivation and Context
Test with the latest QNN SDK.
### Description
Follow-up of #20216 to add kernels for sm=75 (GPUs like T4, GeForce RTX 2080, GeForce GTX 1650 Ti, NVIDIA TITAN RTX, RTX 4000, etc.).
- [x] Add kernel for sm=75
- [x] Update dispatch code to call different kernels based on sm.
- [x] Update compile script to use num_stages=2 instead of 3 for sm=75.
- [x] Refactor test script and add tests for bfloat16.
- [x] Fix performance test of token generation (previously we did not concatenate past_key).
- [x] Fix debug build.
- [x] Run performance test and update numbers.
For sm=70, the v1 kernel can be compiled, but there is an error when compiling the v2 kernel, so sm=70 is skipped in this pull request.
Performance Test on T4 GPU (using Standard_NC4as_T4_v3 Azure VM) with
`batch_size=4, num_heads=32, max_seq_len=8192, head_size=128,
sparse_block_size=64, local_blocks=16, vert_stride=8, num_layout=8`
We compare sparse attention with the corresponding GQA with dense causal attention. Note that dense GQA needs more computation since no sparsity is used. TORCH-GQA uses a naive implementation (using cuSPARSE Block-SpMM could be faster).
```
prompt-sm75-batch4-head32-d128-local16-vert8-torch.float16:
sequence_length TORCH-GQA ORT-GQA-Dense ORT-SparseAtt
1 32.0 0.184173 2.994347 0.089064
2 64.0 0.303300 3.023986 0.107418
3 128.0 0.887795 3.073728 0.174213
4 256.0 2.797654 3.246899 0.357869
5 512.0 10.055048 3.814039 0.893903
6 1024.0 37.849937 5.818439 2.658720
7 2048.0 148.641785 13.638480 7.202690
8 4096.0 OOM 43.556847 17.680954
9 8192.0 OOM 161.628540 44.336670
token-sm75-batch4-head32-d128-local16-vert8-torch.float16:
past_sequence_length TORCH-GQA ORT-GQA-Dense ORT-SparseAtt
1 32.0 0.110353 2.996305 0.137509
2 64.0 0.145088 3.006860 0.165424
3 128.0 0.219500 3.036448 0.192001
4 256.0 0.347496 3.071341 0.249125
5 512.0 0.595842 3.135225 0.398726
6 1024.0 1.081216 3.261110 0.612744
7 2048.0 2.060307 3.515578 0.685670
8 4096.0 OOM 4.022986 0.819707
9 8191.0 OOM 5.024528 1.072912
```
### Motivation and Context
To run inference of Phi-3-small on a T4 GPU.
### Description
This branch is based on rel-1.18.0 and supports TensorRT 10-GA.
### Motivation and Context