Commit graph

7923 commits

Author SHA1 Message Date
PeixuanZuo
4eac0db3af
[ROCm] Add GemmFastGelu CK implementation (#13759)
### Description

Add GemmFastGelu CK implementation.

TODO
1. The performance of CK GemmFastGelu in ORT is not as good as using CK directly; we still need to investigate the reason and improve the CK integration in ORT.
```
GemmFastGeluUnfused float16 NN m=49152 n=3072 k=768 2298.8064 us 100.89 tflops
withbias DeviceGemmMultipleD_Xdl_CShuffle<256, 256, 128, 32, 8, 8, Default> LoopScheduler: Default, PipelineVersion: v1 float16 NN m=49152 n=3072 k=768 2401.9799 us 96.56 tflops
```

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2023-01-05 17:53:30 +08:00
Adrian Lizarraga
2b45410e52
Fix Prefast warning in CUDA contrib op (#14074)
### Description
Fixes Prefast C26814

```shell
onnxruntime::contrib::cuda::QAttention<onnxruntime::MLFloat16,signed char>::ComputeInternal
onnxruntime/contrib_ops/cuda/quantization/attention_quantization.cc
The const variable 'element_size' can be computed at compile-time. Consider using constexpr (con.5).
```
2023-01-04 19:32:06 -08:00
Adrian Lizarraga
68794d0ac1
Improve custom op library handle cleanup (#14099)
### Description
- Adds a new C API `OrtApi::RegisterCustomOpsLibrary_V2` that manages
the lifetime of dynamic library handles (i.e., calls `dlclose` or
`FreeLibrary`).
- Deprecates C API `OrtApi::RegisterCustomOpsLibrary`.
- Adds C++ API wrapper for convenient registering of custom op
libraries.
- `PySessionOptions` is now an alias of `OrtSessionOptions`

### Motivation and Context
The current API for registering custom op libraries loads dynamic
libraries but requires users to handle the release of the corresponding
library handles. Additionally, the user has to make sure to release the
library handle _after_ the session has been destroyed (or the program
segfaults).

The new API automatically cleans up the library and allows the user to
write more straightforward code.
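A minimal sketch of what this looks like from Python, assuming the binding's `register_custom_ops_library` routes through the same `OrtSessionOptions` machinery (the library path and model name below are placeholders):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# The session options take ownership of the library handle; with the V2
# registration path it is released automatically (dlclose/FreeLibrary),
# so no manual cleanup after session destruction is needed.
so.register_custom_ops_library("./libmy_custom_ops.so")  # placeholder path
sess = ort.InferenceSession("model_with_custom_ops.onnx", sess_options=so)
```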
2023-01-04 17:56:29 -08:00
cloudhan
dc997af695
Use RegisterOp to add Op instead of directly manipulate base class field (#14123)
Add API `RegisterOp` to TunableOp.
2023-01-05 09:02:46 +08:00
Nat Kershaw (MSFT)
b313055ad6
Updated issue router to migrated project (#14114) 2023-01-04 14:47:43 -08:00
Ye Wang
ae148ebc05
T5 skip_layer_norm cuda op (#14093)
### Description

T5 uses a layer_norm that only scales and doesn't shift, which is also
known as Root Mean Square Layer Normalization.
ORT already has simplified_layer_norm, which is the RMS layer_norm. This
PR extends this T5 layer_norm with support for skip/bias and the
residual output.
The new op is named SkipSimplifiedLayerNorm and has a similar interface
to SkipLayerNorm, but removes the beta input.
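As a rough illustration of the math, a minimal numpy sketch (the function name, the extra sum output, and the eps default are assumptions; the actual kernel interface may differ):

```python
import numpy as np

def skip_simplified_layer_norm(x, skip, gamma, bias=None, eps=1e-12):
    # Residual add first; the fused op can also expose this sum as an output.
    h = x + skip
    if bias is not None:
        h = h + bias
    # RMS normalization: scale by gamma only; no mean subtraction, no beta shift.
    rms = np.sqrt(np.mean(h * h, axis=-1, keepdims=True) + eps)
    return (h / rms) * gamma, h
```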


Co-authored-by: Ubuntu <wy@v100-2.0cdb2e52twzevn1i4fi45bylyg.jx.internal.cloudapp.net>
2023-01-04 13:31:53 -08:00
Patrice Vignola
b6ea60436d
[DML EP] Decouple the bucketized allocator from the individual block allocation logic (#14056)
### Description
Decouple the DML bucketized allocator from the individual block
allocation logic



### Motivation and Context
This is the first step into using tiled/placed resources instead of
committed resources. Given the potential impact of changing the
allocation logic and the large number of edge cases, I decided to take a
step-by-step approach. It will also reduce the size of the PRs to a
reasonable length, while making sure each PR has a single
responsibility.

Decoupling the logic this way will make it easier in the future to
plug in different kinds of "suballocators" if we want to experiment
with the allocation logic. Currently, the only suballocator is a
committed resource, but placed resources are the next step and will come
in a future PR.
2023-01-04 13:13:54 -08:00
Nat Kershaw (MSFT)
f344d4b3d1
Label issues with mobile when android or ios are present (#14033) 2023-01-04 13:03:25 -08:00
Ye Wang
821baa5b83
Support generation script with custom eos/pad token id (#14113)
### Description

When a custom decoder onnx model is passed in, the user can specify the
eos/pad token ids instead of populating them from the torch config.
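A hypothetical sketch of the fallback (argument and config names are illustrative, not the script's actual interface):

```python
def resolve_special_token_ids(args, config):
    # Prefer ids supplied on the command line; otherwise fall back to the
    # torch/HuggingFace config.
    eos = args.eos_token_id if args.eos_token_id is not None else config.eos_token_id
    pad = args.pad_token_id if args.pad_token_id is not None else config.pad_token_id
    return eos, pad
```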


2023-01-04 10:51:53 -08:00
Ashwini Khade
e5e3570ac5
fix cg issue (#14112)
### Description
Update torch version to 1.13.1 to fix CG issue:
https://dev.azure.com/aiinfra/ONNX%20Runtime/_workitems/edit/10666/
2023-01-04 09:07:13 -08:00
JiCheng
d279ce2f67
bug fix, NNAPI filter out un-supported graph (#14040)
### Description
"Consttant Folding" need to enhance to support "function" in onnx spec.
If those nodes are inlined into sub-graph and captured by a EP,
espeicially this EP doesn't support that, error occured.

There are many test failure in Onnx 1.13 agaist NNAPI, these are listed
bellow;
```
 prelu_broadcast_expanded
 selu_example_expanded_ver18
 layer_normalization_2d_axis0
 shrink_hard_expanded_ver18
 elu_expanded_ver18
 softsign_example_expanded_ver18
 leakyrelu_example_expanded
 hardsigmoid_example_expanded_ver18
 thresholdedrelu_default_expanded_ver18
 split_variable_parts_2d_opset18
efault_expanded
 prelu_example_expanded
 thresholdedrelu_example_expanded_ver18
 selu_default_expanded_ver18
 elu_example_expanded_ver18
 hardsigmoid_default_expanded_ver18
 softsign_expanded_ver18
 hardsigmoid_expanded_ver18
 leakyrelu_expanded
 scatter_with_axis
 selu_expanded_ver18
 shrink_soft_expanded_ver18
 relu_expanded_ver18
 thresholdedrelu_expanded_ver18
 elu_default_expanded_ver18
```


Solution: Prevent NNAPI from capturing these nodes for now; we can revert
this once a better Constant Folding is implemented.


2023-01-04 20:20:00 +08:00
Vincent Wang
15c1157ef2
New Pattern Support for LayerNormFusion (#14118)
The latest torch exporter changed the LayerNorm export code to add two
more Cast nodes (to make it logically correct in compute), but our
current LayerNormFusion doesn't support the new pattern. This PR adds
support for it.
2023-01-04 17:51:14 +08:00
Yi Zhang
f864b54393
Use today's cache only (#14120)
### Description
Add today's date to the cache key.

### Motivation and Context
Microsoft-hosted agents have only 10GB for builds.
To limit the cache size, the pipeline only uses caches generated today.
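A hypothetical illustration of the idea (the key format is made up; the real change lives in the pipeline definition):

```python
from datetime import date

# Keys differ per day, so a build can only restore a cache produced today;
# stale multi-day caches never accumulate on the 10GB agent disk.
cache_key = f"onnxruntime-build-cache-{date.today().isoformat()}"
```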
2023-01-04 17:48:52 +08:00
dependabot[bot]
bdeba4e31c
Bump json5 from 1.0.1 to 1.0.2 in /js (#14109) 2023-01-04 08:54:59 +00:00
Baiju Meswani
0ff61f7b97
Update torch to 1.13.1 in CI and packaging pipelines for ort training (#14055) 2023-01-03 20:03:33 -08:00
cao lei
b29a1c7348
Address follow-up comments on multistream pr #13495 (#13992)
### Description
This PR is to address follow-up comments for the multi-stream pr
https://github.com/microsoft/onnxruntime/pull/13495

Changes including:

- Make StreamAwareArena transparent to minimal build
- Make DeviceStreamCollection transparent to minimal build
- Replace ORT_MUST_USE_RESULT with [[nodiscard]]
- Remove unnecessary shared_ptr


### Motivation and Context
This PR is to address follow-up comments for the multi-stream pr
https://github.com/microsoft/onnxruntime/pull/13495

Co-authored-by: Lei Cao <leca@microsoft.com>
2023-01-03 16:33:36 -08:00
Nat Kershaw (MSFT)
a78ab4fbef
Add labeled to docs workflow trigger (#13979)
Capture the case where issue is manually labeled
2023-01-03 15:19:22 -08:00
Ashwini Khade
68b5b2d7d3
Refactor training build options (#13964)
### Description
1. Renames all references of on-device training to training apis. This
is to keep the naming general; nothing really prevents us from using the
same apis on servers/non-edge devices.
2. Updates the ENABLE_TRAINING option: with this PR, when this option is
enabled, training apis and torch interop are also enabled.
3. Refactors the onnxruntime_ENABLE_TRAINING_TORCH_INTEROP option:
   - Removed the user-facing option
   - Sets onnxruntime_ENABLE_TRAINING_TORCH_INTEROP to ON when
onnxruntime_ENABLE_TRAINING is ON, as we always build with torch interop.

Once this PR is merged, selecting --enable_training will do a
"FULL build" for training (with all the training entry points and
features).
Training entry points include:
1. ORTModule
2. Training APIs

Features include:
1. ATen Fallback
2. All Training OPs includes communication and collectives
3. Strided Tensor Support
4. Python Op (torch interop)
5. ONNXBlock (front-end tools for training artifact prep when using the
training apis)

### Motivation and Context
The intention is to simplify the options for building training-enabled builds.
This is part of the larger work item to create a dedicated build for
learning-on-the-edge scenarios with just the training apis enabled.
2023-01-03 13:28:16 -08:00
Hariharan Seshadri
d43e0ec9ba
Misc transformer fixes (#14103)
### Description
1. SkipLayerNormalization has a new output
(https://github.com/microsoft/onnxruntime/pull/13988) and the symbolic
shape inference script needs corresponding updates

2. The greedy sampling op
(https://github.com/microsoft/onnxruntime/pull/13426) shouldn't re-use
the logits buffer as its corresponding kernel doesn't seem to support it
yet.

### Motivation and Context
Fix some transformer issues
2023-01-03 13:05:55 -08:00
Patrice Vignola
e7f9d40dde
[DML EP] Force instance norm inputs to be 4D to better target metacommands (#14020)
### Description
Force instance norm inputs to be 4D to better target metacommands

### Motivation and Context
This may improve performance on some hardware by allowing the driver to
return valid layouts to DML when querying for metacommand support.
2023-01-03 12:47:10 -08:00
Patrice Vignola
589612106a
[DML EP] Force layer norm inputs to be 4D to better target metacommands (#14022)
### Description
Force layer norm inputs to be 4D to better target metacommands

### Motivation and Context
This may improve performance on some hardware by allowing the driver to
return valid layouts to DML when querying for metacommand support.
2023-01-03 12:46:33 -08:00
RandySheriffH
587e891cae
CloudEP (#13855)
Implement CloudEP for hybrid inferencing.
The PR introduces zero new APIs; customers can configure session and
run options to do inferencing with an Azure [triton
endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-with-triton?tabs=azure-cli%2Cendpoint).
A sample configuration in Python looks like:

```
sess_opt.add_session_config_entry('cloud.endpoint_type', 'triton')
sess_opt.add_session_config_entry('cloud.uri', 'https://cloud.com')
sess_opt.add_session_config_entry('cloud.model_name', 'detection2')
sess_opt.add_session_config_entry('cloud.model_version', '7')  # optional, default 1
sess_opt.add_session_config_entry('cloud.verbose', '1')  # optional, default '0', meaning no verbose
...
run_opt.add_run_config_entry('use_cloud', '1')  # 0 for local inferencing, 1 for cloud endpoint
run_opt.add_run_config_entry('cloud.auth_key', '...')
...
sess.run(None, {'input': input_}, run_opt)
```

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
2023-01-03 10:03:15 -08:00
Yi Zhang
52e3fe961d
add dnnl dependency in unittest.cmake (#14104)
### Description
It follows up on PR #14085.
When multiple msbuild processes run concurrently, the build throws:
```
22-12-30T16:35:34.2423207Z ##[error]C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Microsoft\VC\v160\Microsoft.CppCommon.targets(155,5): Error MSB3073: The command "setlocal
"C:\Program Files\CMake\bin\cmake.exe" -E copy D:/a/_work/1/b/RelWithDebInfo/dnnl/install/bin/dnnl.dll D:/a/_work/1/b/RelWithDebInfo/RelWithDebInfo
if %errorlevel% neq 0 goto :cmEnd
:cmEnd
endlocal & call :cmErrorLevel %errorlevel% & goto :cmDone
:cmErrorLevel
exit /b %1
:cmDone
if %errorlevel% neq 0 goto :VCEnd
:VCEnd" exited with code 1.
```

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=847423&view=logs&j=249e9d58-0012-5814-27cf-6a201adbd9cf&t=182b9780-832e-5dcb-3957-d6aa3ece582f
This change makes sure that the onnxruntime_test_all project depends on
the dnnl project.
2023-01-03 11:24:06 +08:00
cloudhan
613920d6c5
Remove all usages of hipLaunchKernelGGL (#14089)
All those macro syntaxes are a mistake, per
https://github.com/ROCm-Developer-Tools/HIP-CPU/issues/8#issuecomment-756188453;
they should have been corrected in the documentation but were not. We
moved away from hipThreadIdx_* in some previous commits; now we move away
from hipLaunchKernelGGL.
2023-01-02 12:55:44 +08:00
Tianlei Wu
6a9dc6c993
[CUDA] Update fused MHA to support flash attention and causal mask (#13953)
### Description
Update fused attention kernels to support flash attention and causal
mask (GPT-2 initial decoder run).

Note: Causal kernels are from FasterTransformer 5.2. Flash attention
kernels that are not causal are from TensorRT 8.5.1.

#### Performance Test of bert-base model

Tested with a command like the following:
```
 python -m onnxruntime.transformers.benchmark -m bert-base-cased -b 1 4 8 16 32 64 -s 512 -t 1000 -o by_script -g -p fp16 -i 3 --use_mask_index
```

The original Flash Attention is from
https://github.com/HazyResearch/flash-attention. RemovePadding and
RestorePadding are added before/after the original flash attention, but
not as part of this PR, so the result is not an apples-to-apples
comparison. It is included for reference only.

Average latency (ms) of float16 bert-base-cased model:

* A100

Kernel | b1_s512 | b4_s512 | b8_s512 | b16_s512 | b32_s512 | b64_s512 | b128_s512
-- | -- | -- | -- | -- | -- | -- | --
Unfused | 1.83 | 5.00 | 9.31 | 17.76 | 34.47 | 67.43 | 133.38
TRT Fused | 2.05 | 3.58 | 5.70 | 10.96 | 21.22 | 41.23 | 80.56
Flash Attention (from FT) | 1.43 | 3.20 | 5.71 | 10.95 | 22.19 | 42.96 | 84.54
Flash Attention (from TRT) | 1.44 | 3.28 | 5.70 | 10.86 | 21.00 | 40.56 | 79.53
Original Flash Attention | 1.81 | 4.04 | 6.82 | 13.06 | 24.62 | 46.58 | 91.10

* T4

Kernel | b1_s512 | b4_s512 | b8_s512 | b16_s512 | b32_s512 | b64_s512
-- | -- | -- | -- | -- | -- | --
Unfused | 8.17 | 29.86 | 59.56 | 115.77 | 236.66 | 461.43
Flash Attention (from FT) | 5.65 | 21.12 | 44.94 | 86.83 | 174.16 | 351.38
Flash Attention (from TRT) | 5.73 | 21.49 | 45.49 | 89.15 | 174.37 | 352.08
Original Flash Attention | 6.22 | 22.16 | 43.39 | 83.8 | 168.77 | 337.04

* V100

Kernel | b1_s512 | b4_s512 | b8_s512 | b16_s512 | b32_s512 | b64_s512
-- | -- | -- | -- | -- | -- | --
Unfused | 3.77 | 10.48 | 19.53 | 37.63 | 73.68 | 145.58
Flash Attention (from FT) | 3.21 | 8.25 | 14.95 | 28.83 | 56.28 | 111.15

#### Performance Test of GPT-2 model
Tested with a command like the following:
```
python benchmark_gpt2.py -m distilgpt2 -o --stage 1 --use_gpu -p fp16 -b 1 4 8 16 32 64 128 -s 0 --sequence_lengths 8 16 32 64 128 256 512
```
* A100

Note that flash attention is used as fused attention when
sequence_length > 128.

batch_size | sequence_length | with Fused Attention | without Fused Attention | A100 Gain
1 | 8 | 0.93 | 1 | 7.0%
4 | 8 | 0.82 | 0.88 | 6.8%
8 | 8 | 0.84 | 0.88 | 4.5%
16 | 8 | 0.92 | 0.97 | 5.2%
32 | 8 | 1.15 | 1.17 | 1.7%
64 | 8 | 1.68 | 1.72 | 2.3%
128 | 8 | 2.76 | 2.78 | 0.7%
1 | 16 | 0.95 | 0.95 | 0.0%
4 | 16 | 0.83 | 0.88 | 5.7%
8 | 16 | 0.91 | 0.97 | 6.2%
16 | 16 | 1.12 | 1.17 | 4.3%
32 | 16 | 1.67 | 1.72 | 2.9%
64 | 16 | 2.73 | 2.76 | 1.1%
128 | 16 | 4.96 | 4.95 | -0.2%
1 | 32 | 0.94 | 0.88 | -6.8%
4 | 32 | 0.91 | 0.97 | 6.2%
8 | 32 | 1.12 | 1.17 | 4.3%
16 | 32 | 1.65 | 1.71 | 3.5%
32 | 32 | 2.69 | 2.76 | 2.5%
64 | 32 | 4.86 | 4.94 | 1.6%
128 | 32 | 9.35 | 9.38 | 0.3%
1 | 64 | 0.84 | 0.88 | 4.5%
4 | 64 | 1.1 | 1.17 | 6.0%
8 | 64 | 1.64 | 1.73 | 5.2%
16 | 64 | 2.66 | 2.77 | 4.0%
32 | 64 | 4.82 | 4.97 | 3.0%
64 | 64 | 9.23 | 9.4 | 1.8%
128 | 64 | 18.54 | 19.12 | 3.0%
1 | 128 | 0.91 | 0.98 | 7.1%
4 | 128 | 1.68 | 1.74 | 3.4%
8 | 128 | 2.71 | 2.83 | 4.2%
16 | 128 | 4.85 | 5.09 | 4.7%
32 | 128 | 9.32 | 9.69 | 3.8%
64 | 128 | 18.54 | 19.44 | 4.6%
128 | 128 | 36.86 | 38.47 | 4.2%
1 | 256 | 1.15 | 1.23 | 6.5%
4 | 256 | 2.71 | 2.95 | 8.1%
8 | 256 | 4.87 | 5.3 | 8.1%
16 | 256 | 9.32 | 10.23 | 8.9%
32 | 256 | 18.6 | 20.53 | 9.4%
64 | 256 | 36.93 | 40.41 | 8.6%
128 | 256 | 72.84 | 80.14 | 9.1%
1 | 512 | 1.68 | 1.96 | 14.3%
4 | 512 | 4.9 | 6.02 | 18.6%
8 | 512 | 9.4 | 11.59 | 18.9%
16 | 512 | 18.71 | 23.05 | 18.8%
32 | 512 | 37.13 | 45.46 | 18.3%
64 | 512 | 74.04 | 89.88 | 17.6%
128 | 512 | NA | NA | NA

* T4:

batch_size | sequence_length | with Fused Attention | with Unfused Attention | T4 Gain
1 | 8 | 1.97 | 2.11 | 6.6%
4 | 8 | 2.2 | 2.25 | 2.2%
8 | 8 | 2.77 | 3.1 | 10.6%
16 | 8 | 4.17 | 4.2 | 0.7%
32 | 8 | 6.86 | 6.82 | -0.6%
64 | 8 | 14.88 | 14.92 | 0.3%
128 | 8 | 31.4 | 31.29 | -0.4%
1 | 16 | 1.61 | 1.71 | 5.8%
4 | 16 | 2.13 | 2.31 | 7.8%
8 | 16 | 3.38 | 3.67 | 7.9%
16 | 16 | 6.16 | 6.54 | 5.8%
32 | 16 | 14.16 | 14.76 | 4.1%
64 | 16 | 30.36 | 30.57 | 0.7%
128 | 16 | 63.14 | 63.57 | 0.7%
1 | 32 | 1.53 | 1.69 | 9.5%
4 | 32 | 3.34 | 3.66 | 8.7%
8 | 32 | 6.25 | 6.64 | 5.9%
16 | 32 | 14.12 | 14.9 | 5.2%
32 | 32 | 28.96 | 29.82 | 2.9%
64 | 32 | 61.07 | 61.77 | 1.1%
128 | 32 | 116.38 | 117.98 | 1.4%
1 | 64 | 2.01 | 2.21 | 9.0%
4 | 64 | 6.18 | 6.67 | 7.3%
8 | 64 | 13.72 | 14.49 | 5.3%
16 | 64 | 28.71 | 29.83 | 3.8%
32 | 64 | 58.65 | 60.68 | 3.3%
64 | 64 | 113.09 | 113.17 | 0.1%
128 | 64 | 205.21 | 209.4 | 2.0%
1 | 128 | 3.37 | 3.76 | 10.4%
4 | 128 | 13.54 | 14.85 | 8.8%
8 | 128 | 28.32 | 30.22 | 6.3%
16 | 128 | 58.16 | 62.09 | 6.3%
32 | 128 | 109.17 | 113.99 | 4.2%
64 | 128 | 198.9 | 207.1 | 4.0%
128 | 128 | 413.25 | 421.82 | 2.0%
1 | 256 | 6.33 | 7.05 | 10.2%
4 | 256 | 28.09 | 31.49 | 10.8%
8 | 256 | 57.47 | 62.76 | 8.4%
16 | 256 | 106.77 | 117.95 | 9.5%
32 | 256 | 197.02 | 208.58 | 5.5%
64 | 256 | 406.81 | 431.36 | 5.7%
128 | 256 | NA | NA | NA
1 | 512 | 13.84 | 16.32 | 15.2%
4 | 512 | NA | NA | NA
8 | 512 | NA | NA | NA
16 | 512 | NA | NA | NA
32 | 512 | NA | NA | NA
64 | 512 | NA | NA | NA
128 | 512 | NA | NA | NA

* V100:

batch_size | sequence_length | with Fused Attention | with Unfused Attention | V100 Gain
1 | 8 | 1.31 | 1.6 | 18.1%
4 | 8 | 1.17 | 1.26 | 7.1%
8 | 8 | 1.43 | 1.79 | 20.1%
16 | 8 | 2.14 | 1.96 | -9.2%
32 | 8 | 2.91 | 3.08 | 5.5%
64 | 8 | 5.32 | 5.27 | -0.9%
128 | 8 | 9.34 | 8.97 | -4.1%
1 | 16 | 1.41 | 1.58 | 10.8%
4 | 16 | 1.38 | 1.49 | 7.4%
8 | 16 | 1.81 | 2.2 | 17.7%
16 | 16 | 2.8 | 2.83 | 1.1%
32 | 16 | 4.94 | 4.99 | 1.0%
64 | 16 | 8.88 | 8.84 | -0.5%
128 | 16 | 17.35 | 17.2 | -0.9%
1 | 32 | 1.38 | 1.77 | 22.0%
4 | 32 | 1.77 | 1.93 | 8.3%
8 | 32 | 2.71 | 2.86 | 5.2%
16 | 32 | 5.03 | 4.92 | -2.2%
32 | 32 | 8.8 | 8.79 | -0.1%
64 | 32 | 17.29 | 17.23 | -0.3%
128 | 32 | 33.27 | 33.1 | -0.5%
1 | 64 | 1.67 | 1.87 | 10.7%
4 | 64 | 2.69 | 2.76 | 2.5%
8 | 64 | 4.87 | 4.94 | 1.4%
16 | 64 | 8.73 | 8.81 | 0.9%
32 | 64 | 16.92 | 17.24 | 1.9%
64 | 64 | 33 | 33.38 | 1.1%
128 | 64 | 65.33 | 65.86 | 0.8%
1 | 128 | 2.03 | 2.22 | 8.6%
4 | 128 | 4.9 | 5.04 | 2.8%
8 | 128 | 8.76 | 8.81 | 0.6%
16 | 128 | 17.06 | 17.29 | 1.3%
32 | 128 | 33.25 | 33.56 | 0.9%
64 | 128 | 65.54 | 66.5 | 1.4%
128 | 128 | 130.44 | 131.44 | 0.8%
1 | 256 | 2.78 | 2.86 | 2.8%
4 | 256 | 8.75 | 9.04 | 3.2%
8 | 256 | 17 | 17.68 | 3.8%
16 | 256 | 33.19 | 34.32 | 3.3%
32 | 256 | 65.43 | 67.86 | 3.6%
64 | 256 | 129.92 | 134.68 | 3.5%
128 | 256 | NA | NA | NA
1 | 512 | 4.95 | 5.32 | 7.0%
4 | 512 | NA | NA | NA
8 | 512 | NA | NA | NA
16 | 512 | NA | NA | NA
32 | 512 | NA | NA | NA
64 | 512 | NA | NA | NA
128 | 512 | NA | NA | NA
2022-12-31 10:33:54 -08:00
Dmitri Smirnov
5d729839b5
Support loading widechar paths on windows (#14066)
### Description
Make GetRuntimePath() and LoadDynamicLibrary() operate on platform
specific paths

### Motivation and Context
This addresses https://github.com/microsoft/onnxruntime/issues/14063
2022-12-30 16:30:11 -08:00
Baiju Meswani
b85878953f
Fix nightly ort training ci pipeline (#14007) 2022-12-30 12:28:57 -08:00
Tang, Cheng
9f52a8bc55
fix the cuda EP test failure (#14087)
### Description
Fix a regression failure for cuda EP test



### Motivation and Context
The CUDA EP test is a special test case under the EP folder, not in the
test folder. When refactoring the code during the multi-stream work, we
missed it. This PR fixes the test.

Co-authored-by: Cheng Tang <chenta@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
2022-12-28 17:04:27 -08:00
Chen Fu
a79d88ed7f
Fix prefast scan bug 8995 (#14077)
### Description
Prefast warning fix


### Motivation and Context
https://dev.azure.com/aiinfra/ONNX%20Runtime/_workitems/edit/8995/

Co-authored-by: Chen Fu <fuchen@microsoft.com>
2022-12-28 10:38:36 -08:00
Vincent Wang
0c3480e565
[ORTModule] ATen upsample_nearest Gradient Bugfix (#14069)
PyTorch removed the upsample_nearest-related backward functions with the
"vec" overload name in 1.13. The functions without an overload name are
available for all versions, though they are not as convenient to use.
This PR changes the gradient builder code to use the functions without
an overload name for ATen upsample_nearest nodes.

This PR also fixes a bug in an ORTModule corner case introduced by the
multi-stream PR. There is some code that executes the barrier step for a
triggered downstream if the barrier is out of range, but this should
apply to triggered downstream runs only. If it's a normal run whose
start step is a barrier step but out of range, we should not apply the
logic. For example, for ORTModule, if the barrier is the 1st step of the
whole CPU plan and the forward part is empty, then the forward normal
run will run steps from start-0 to end-0 (actually nothing); step-0 is
the barrier, and we should not execute the barrier in that case.
2022-12-27 10:18:30 +08:00
PeixuanZuo
770fb9649b
[Fix] CPU memory type need device_id=0 to get allocator (#14050)
### Description
Fix an error: 
`onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException:
[ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during
initialization:
/onnxruntime_src/onnxruntime/core/framework/allocation_planner.cc:819
onnxruntime::common::Status
onnxruntime::PlannerImpl::ComputeValueLocation() allocator was false.`

This error happens when we run huggingface models with DDP on
multiple GPUs. In a thread with rank>0, it attempts to obtain a CPU
memory allocator with device_id>0, which causes the error. As a
workaround, we check whether a node's output is on the CPU; if it is,
we set device_id = 0.
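A hypothetical sketch of that workaround logic (names are illustrative; the real fix lives in allocation_planner.cc):

```python
def resolve_device_id(mem_type: str, device_id: int) -> int:
    # CPU memory has a single logical device, so rank-local GPU ids (>0)
    # must not be used when looking up a CPU allocator.
    return 0 if mem_type == "CPU" else device_id
```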

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2022-12-26 13:58:01 +08:00
Dmitri Smirnov
d762aa2a4c
Let Cmake decide where to place abseil (#14057)
### Description
Remove Abseil module placement specifications


### Motivation and Context
Allow CMake defaults to take effect, and enable possible redirection of
all submodules for sharing between local builds.
2022-12-23 12:08:13 -08:00
Adrian Lizarraga
3bbcc2799f
Support for custom op variadic inputs/outputs (#13946)
### Description
Adds support for variadic inputs and outputs to custom operators.

### Motivation and Context
Needed for custom ops that wrap external runtimes/models and maybe TensorRT plugins.
2022-12-23 11:41:15 -08:00
Tianlei Wu
8ac264b896
Deprecate one step beam search (#14046)
### Description
Deprecate one step beam search since it lacks maintenance (some tests
failed) and its performance is not optimal.

For users who still need this feature, please use an older version
(<=1.13.1) of onnxruntime to export the one step beam search model; the
model can still run in the latest onnxruntime.

It is recommended to use
[convert_generation.py](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/convert_generation.py)
to generate a beam search onnx model for better performance.
2022-12-22 23:14:31 -08:00
Adam Louly
e49f358686
expose lr scheduler python bindings for on device training. (#13882)
### Description
Exposing LR Scheduler python bindings for on device training.

Co-authored-by: Baiju Meswani <bmeswani@microsoft.com>
2022-12-22 18:44:04 -08:00
Ye Wang
b999022b03
disable some test input for generation model temporarily (#14045)
### Description

Newly added test cases break the parity check; disable them temporarily
during investigation.


Co-authored-by: Ubuntu <wy@v100-2.0cdb2e52twzevn1i4fi45bylyg.jx.internal.cloudapp.net>
2022-12-22 17:34:24 -08:00
Ye Wang
68518a1b72
Sampling op (#13426)
### Description

Sampling op for CPU and CUDA; supports the huggingface case and a
custom case.
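As a rough sketch of what top-k sampling computes (the op's actual inputs and attributes may differ; this is only the concept):

```python
import numpy as np

def sample_top_k(logits, k=50, temperature=1.0, rng=None):
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    top_idx = np.argpartition(scaled, -k)[-k:]        # k highest-logit tokens
    probs = np.exp(scaled[top_idx] - scaled[top_idx].max())
    probs /= probs.sum()                              # renormalize over top-k
    return rng.choice(top_idx, p=probs)               # draw one token id
```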
            



Co-authored-by: Ubuntu <wy@v100-2.0cdb2e52twzevn1i4fi45bylyg.jx.internal.cloudapp.net>
2022-12-22 17:34:12 -08:00
fxmarty
4d2dc8bbbd
Replace all numpy.bool by python builtin bool (#14014)
`numpy.bool` has been removed as of 1.24.0.

It was previously an alias for Python's `bool`.

Fixes https://github.com/huggingface/optimum/issues/610

### Motivation and Context

Numpy 1.24.0 breaks, for example, the IO binding helpers.
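A minimal before/after example of the change this implies for user code:

```python
import numpy as np

# Before (raises AttributeError on numpy >= 1.24, where the alias was removed):
# mask = np.zeros(4, dtype=np.bool)

# After: use the Python builtin (or np.bool_ for the numpy scalar type).
mask = np.zeros(4, dtype=bool)
```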
2022-12-23 09:27:23 +10:00
Baiju Meswani
1b58331fb3
[QAT] Graph transformer to fuse QDQ pattern into FakeQuant (#13777)
To perform QAT in onnxruntime, the `FakeQuant` op was introduced in #13649.

The onnxruntime quantization tool generates a post training static
quantization onnx model with `QuantizeLinear`->`DequantizeLinear` nodes.
To perform QAT, this pattern needs to be transformed to `FakeQuant`.

This pull request introduces a graph transformer that looks for the
`Q->DQ` pattern and fuses it to a `FakeQuant` node.
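Conceptually, the fused node computes the same thing as the `Q->DQ` pair; a minimal numpy sketch (the qmin/qmax defaults are assumptions):

```python
import numpy as np

def fake_quant(x, scale, zero_point, qmin=-128, qmax=127):
    # QuantizeLinear: scale, shift, round, clamp ...
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    # ... then DequantizeLinear: undo the shift and scale.
    return (q - zero_point) * scale
```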
2022-12-22 09:44:39 -08:00
Tianlei Wu
944bff0ad6
Support two stages onnx GPT-2 conversion (#14025)
### Description

Add support of ONNX conversion of GPT-2 for two stages:
* Stage 1 is the initial stage that has empty past state. 
* Stage 2 has non-empty past state and sequence_length is 1.

Add a parameter --stage to specify such stage. For stage 1, we will
enable mask_index for Attention so that we can use fused attention in
CUDA.

Other changes:
(1) use int32 inputs as default (otherwise, there is an error in inference)
(2) update gpt2_parity to include SkipLayerNormalization (see
https://github.com/microsoft/onnxruntime/pull/13988) and
EmbedLayerNormalization
(3) get all environment variables that might impact GPT-2 latency in
benchmark_gpt2

### Motivation and Context

To test fused attention for GPT-2 model for
https://github.com/microsoft/onnxruntime/pull/13953.
2022-12-22 09:33:01 -08:00
PeixuanZuo
694ba033e9
[ROCm] update skip_layernorm test sample (#14051)
### Description

A larger batch_size won't cover more implementations and may block CI;
remove batch_size 128.

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2022-12-22 21:18:10 +08:00
pengwa
2f5bf75e51
Optimize computation orders (#13672)
### Optimize computation orders

In `Roberta/Electra`, when `ClassificationHead` is used, there is a
slicing operation on the features along the sequence_length dimension,
and the loss calculation depends only on this sliced data. This is a
slice at axis 1: before slicing the shape is [batch, sequence_length,
hidden]; after slicing it becomes [batch, hidden].

We have the opportunity to move this slicing earlier, as much as
possible, by passing it through simple elementwise ops (like Add/Div),
through Layernorm/Softmax (if their reduce axis comes after the slicing
axis), or even through MatMul's left operand (as long as it does not
affect the last dims).

Operators like Reshape/Transpose are special, since they either have
shape data specified (which needs updating after slicing) or a perm
attribute specified, which requires the input rank to remain unchanged.
For those kinds of operators we keep the original rank but set the
sliced dim to 1, and after the computation completes we do a Squeeze
(see the sketch after the code below).

```
class RobertaClassificationHead(nn.Module):
    """Head for sentence-level classification tasks."""

    def __init__(self, config):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        classifier_dropout = (
            config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(classifier_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features, **kwargs):
        x = features[:, 0, :]  # take <s> token (equiv. to [CLS])
        x = self.dropout(x)
        x = self.dense(x)
        x = torch.tanh(x)
        x = self.dropout(x)
        x = self.out_proj(x)
        return x
```

src\transformers\models\roberta\modeling_roberta.py
src\transformers\models\electra\modeling_electra.py
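As referenced above, a minimal numpy sketch of why the slice can move ahead of an elementwise op (shapes are illustrative):

```python
import numpy as np

batch, seq, hidden = 4, 128, 16
features = np.random.randn(batch, seq, hidden).astype(np.float32)
bias = np.random.randn(hidden).astype(np.float32)

# Original order: Add over the full [batch, seq, hidden] tensor, then slice.
sliced_late = (features + bias)[:, 0, :]

# Reordered: slice first, then Add on the much smaller [batch, hidden] tensor.
sliced_early = features[:, 0, :] + bias

assert np.allclose(sliced_late, sliced_early)
```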

#### Benchmark

A simple benchmark shows Roberta training latency dropped from 208ms to
199ms, a 4.5+% reduction.
More comprehensive tests are on the way.

2022-12-22 15:12:52 +08:00
Hariharan Seshadri
7ed8bd4f95
Support (Bias)SkipLayerNormalization fusion in GPT2 (#13988) 2022-12-21 23:04:44 -08:00
Joseph Groenenboom
baba312e30
Add provider selection for gpt2/convert_to_onnx.py (#13982)
Allows the user to select from supported backends for gpt2/convert_to_onnx.py. Default behavior is preserved if no provider is selected. This allows the ROCm EP to be selected.
2022-12-22 11:41:09 +08:00
PeixuanZuo
a170e40fbb
[ROCm] Update Dockerfiles of ROCm and MIgraphX to ROCm5.4 (#14013)
Update Dockerfiles of ROCm and MIGraphX to ROCm5.4
Update README.md

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2022-12-22 10:03:34 +08:00
PeixuanZuo
b5fd2a6a80
[ROCm] Add ROCm5.4 to python package pipeline (#14012)
Add ROCm5.4 to the python package pipeline.
The download link of the ROCm5.4 nightly build whl is
https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.html
The download link of the ROCm5.4 nightly build whl with profiling is
https://download.onnxruntime.ai/onnxruntime_nightly_rocm54.profiling.html

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2022-12-22 10:01:40 +08:00
PeixuanZuo
ab2dd8dfaf
[ROCm] Update ROCm and MigraphX CI to ROCm5.4 (#14011)
Update ROCm and MigraphX CI to ROCm5.4
Ran ortmodule_test with ROCm5.4 and all tests
passed (https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=824742&view=logs&j=8292f886-7946-5da9-7977-04484c342eda&t=5de68eaa-cbdc-5be5-13d0-bb946f4ddb2d).

Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
2022-12-22 10:01:05 +08:00
Edward Chen
df8ff34f25
Update CUDA ArgMin/ArgMax op kernels to have end version 11 since opset 12+ is not supported yet. (#13983)
### Description

Update CUDA ArgMin/ArgMax op kernels to have end version 11 since opset
12+ is not supported yet.
With the way these kernels are currently registered, the documentation
shows support for opset 11+. This is not accurate.

### Motivation and Context

Fix #13781
2022-12-21 19:01:00 -05:00
Numfor Tiapo
8943d623a4
DML EP Register operators for Opset 16 (#14034)
This PR registers the following operators for opset 16 to the DML EP:

- LeakyRelu-16
- PRelu-16
- Where-16
- GreaterOrEqual-16
- LessOrEqual-16

Identity-16 was not added in this PR due to pipeline failures

Co-authored-by: Numfor Mbiziwo-Tiapo <numform@microsoft.com>
2022-12-21 09:05:12 -08:00
JiCheng
1a177a1713
Cover beta in all Conv paths. (#14008)
2022-12-21 09:02:48 -08:00