The PIX capture tool requires a 'present' call to end a frame capture. ORT does no
rendering work, so no 'present' ever happens.
To keep the PIX capture tool from waiting indefinitely, this PR adds a blank
surface and presents it in each session run.
The surface is created in the WebGPU EP constructor and closed in the WebGPU EP
destructor.
### Description
### Motivation and Context
Validate context_file_path before the EP compiles graphs so that invalid paths fail fast, and to avoid the possibility that the EP generates a new file (context binary or blob file) that overwrites an existing file. Return an error if the path points to a folder.
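For reference, a minimal Python sketch of how the EP context options are typically set on a session; the model path and provider choice are illustrative, not from this PR:
```python
import onnxruntime as ort

so = ort.SessionOptions()
so.add_session_config_entry("ep.context_enable", "1")
# Must point to a file; with this change an invalid path (e.g. an existing
# folder) is rejected when the session is created, before graphs are compiled.
so.add_session_config_entry("ep.context_file_path", "./model_ctx.onnx")

# Provider choice is illustrative; EP context generation applies to EPs such as QNN.
sess = ort.InferenceSession("model.onnx", so, providers=["QNNExecutionProvider"])
```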
The CPU wall time spent waiting for PopErrorScope is non-trivial, and validation
errors are not expected to occur in Release builds.
### Description
* Pass topk_scores to beam scorer in slow topk path.
* Add an env variable `ORT_BEAM_SEARCH_USE_FAST_TOPK` to enable/disable fast topk (see the example after this list).
* Add a test case for slow topk path.
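A minimal sketch of toggling the fast top-k path from Python via the new environment variable (the model path and provider are illustrative):
```python
import os

# "0" forces the slow top-k path (the one fixed here); unset or "1" allows the
# fast top-k CUDA kernel when the beam size is small enough.
os.environ["ORT_BEAM_SEARCH_USE_FAST_TOPK"] = "0"

import onnxruntime as ort

sess = ort.InferenceSession("model_with_beam_search.onnx",
                            providers=["CUDAExecutionProvider"])
```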
### Motivation and Context
This bug was introduced in
https://github.com/microsoft/onnxruntime/pull/16272.
Beam search uses a fast CUDA kernel when the number of beams is <= 32. When the
beam size is larger than that threshold, another code path (a slower CUDA
kernel) is used to get the top-k. In this `slow topk path`, topk_scores should be
passed to the beam scorer, but it was not.
This bug causes incorrect results when num_beams > 32. It was not found earlier
because such large beam sizes are rarely used.
### Description
This change implements FlashAttention 2 in the WebGPU EP for the MHA
operator.
Numbers from an Alder Lake device show a 2.2x speedup for prefill; considering
that attention is about 50% of the prefill phase (the other 50% being MatMul),
this implies roughly a 4x speedup for attention with this implementation. This
is in line with the expected 2-4x gain of FlashAttention over regular
attention.
```
Baseline
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 9.54997e+06 <<<<<
avg (tokens/s): 104.817
p50 (us): 9.49218e+06
stddev (us): 251442
n: 5 * 1001 token(s)
------
With FlashAttention 2
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 4.27937e+06 <<<<<
avg (tokens/s): 233.913
p50 (us): 4.27687e+06
stddev (us): 5344.1
n: 5 * 1001 token(s)
```
### Motivation and Context
On integrated GPUs memory bandwidth is premium, Flash attention makes
softmax computation (and therefore output attention vector computation)
a running operation instead of maintaining full QKt attention scores in
memory. As a result, we see significant improvements in prefill speed -
200% speed up measured here.
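For intuition, a minimal NumPy sketch of the streaming-softmax idea (this is not the WGSL shader in this change; the tile size and names are illustrative):
```python
import numpy as np

def streaming_attention(q, K, V, tile=64):
    """Attention for a single query row, processing K/V in tiles so the full
    QK^T row never needs to be materialized at once."""
    d = q.shape[-1]
    m = -np.inf                      # running max of the scores seen so far
    l = 0.0                          # running sum of exp(score - m)
    acc = np.zeros(V.shape[-1])      # running weighted sum of V rows
    for start in range(0, K.shape[0], tile):
        k_t, v_t = K[start:start + tile], V[start:start + tile]
        s = k_t @ q / np.sqrt(d)     # scores for this tile
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)    # rescale previously accumulated results
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ v_t
        m = m_new
    return acc / l
```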
This change also uses techniques from cooperative matrix multiplication, using
registers shared across a subgroup for fast in-register matrix multiplies.
Without the cooperative matrix multiply technique, Alder Lake showed about 6.0 s
prefill time.
Tested on Alder Lake/Tiger Lake Intel integrated GPUs and an Nvidia 4070.
### Future Work
- Fine tuning and profiling optimizations.
- The current implementation is for prefill only. A generation-phase-optimized
FA2 implementation is possible, but attention is only a tiny part of the
generation phase.
### Description
These changes ensure that weight sharing happens between two models using the session context option ep_weight_sharing.
Key changes introduced in this feature are:
- Creating a shared context between the two models.
- Extracting external constant initializers and relabelling them as inputs to the model, to allow weight loading directly from the blob.
- Creating EP context nodes when subgraph partitioning is happening.
### Motivation and Context
This change is required to ensure that LLMs with prefill and kv-cache models can use the same shared weights.
It is also required to ensure that EP context nodes can be formed even when the model is being subgraph partitioned.
---------
Co-authored-by: jatinwadhwa921 <jatin.wadhwa@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: saurabh <saurabh1.kale@intel.com>
Co-authored-by: TejalKhade28 <tejal.khade@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Javier E. Martinez <javier.e.martinez@intel.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
### Description
BeamSearchTest.DummyT5WithSequenceInputIds failed on Windows because early
stopping was triggered. The cause is that state.early_stopping_ is interpreted
as true in the CUDA kernel at some point, even though printf still shows its
value as false. The root cause is unknown.
Updating the code to use early_stopping as a template parameter seems to work
around the issue.
Other changes:
* Add some debug code (not built into the binary unless DEBUG_GENERATION is
defined) to assist in debugging the beam search scorer in CUDA.
* Enable the DummyT5WithSequenceInputIds test in CI. This test was not
previously run in the Windows CUDA CI pipeline.
### Motivation and Context
Fix the BeamSearchTest.DummyT5WithSequenceInputIds unit test failure on
Windows.
### Description
This PR is a follow-up to
https://github.com/microsoft/onnxruntime/pull/23488 and partially
improves upon https://github.com/microsoft/onnxruntime/issues/23403. It
does the following:
- Prevents unnecessary recompilation of the cached shader for the 'nearest'
resize operation.
- Fixes precision (off-by-one) errors with the asymmetric coordinate
transform. When running the Kokoro TTS model, values for the
`/decoder/decoder/generator/f0_upsamp/Resize_output_0` output show
differences at the end bounds due to precision issues when dividing
21600 by 72 (it should be 300, but seemingly results in 299.999, which
causes issues when flooring).
### Motivation and Context
I did a deep dive over the weekend to try to fix Kokoro TTS on WebGPU and
found that the above node had a large difference. Thinking this was a
major issue, I spent some time fixing it. It turns out it only happens for
a small number of values, leading to a high maximum error, while most values
are correct (as seen here).
BEFORE:
```
[/decoder/decoder/generator/f0_upsamp/Resize_output_0] atol: 78.6640682220459 | rtol: 24.13991587587724 | avgDiff: 0.009967932171121087 | medianDiff: 0.000030517578125
```
AFTER:
```
[/decoder/decoder/generator/f0_upsamp/Resize_output_0] atol: 0.0011138916015625 | rtol: 0.0020059924232260704 | avgDiff: 0.00008570214675873825 | medianDiff: 0.000030517578125
```
So, although it has a very small impact on the final output (waveform),
this bug could appear with other models in a more severe way.
BEFORE:
```
[waveform] atol: 0.04784199967980385 | rtol: 1366.0462001093495 | avgDiff: 0.0009544936942737713 | medianDiff: 0.00015346752479672432
```
AFTER:
```
[waveform] atol: 0.04775865003466606 | rtol: 1354.7002460360852 | avgDiff: 0.000954830244055033 | medianDiff: 0.00015274062752723694
```
### Description
The quantization calibrators have `execution_providers` attributes but
there is no way for a user to provide their own providers when using the
`quantize` or `quantize_static` functions. This PR adds a
`calibration_providers` parameter to allow users to specify the
execution providers to use during calibration. It is helpful when
quantizing large models which are slow to calibrate on the CPU.
- Chose `calibration_providers` as the name because the docstrings refer to an
`execution_provider` parameter
(169917b1e7/onnxruntime/python/tools/quantization/quantize.py (L204),
169917b1e7/onnxruntime/python/tools/quantization/quantize.py (L415))
that is not present anywhere in the code.
- The name can be changed to something else if needed (e.g.
calibrator_providers), and/or made a string instead of a providers list.
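A usage sketch of the new parameter (the data reader, model paths, and input name/shape here are illustrative assumptions):
```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random samples; a real reader would yield representative data."""
    def __init__(self, n=8):
        self._data = iter(
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(n))

    def get_next(self):
        return next(self._data, None)

quantize_static(
    "model_fp32.onnx",
    "model_int8.onnx",
    RandomCalibrationReader(),
    # New in this PR: run the calibration inference on these providers
    # instead of the default CPU provider.
    calibration_providers=["CUDAExecutionProvider"],
)
```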
Add a session option that lets users load a model with external data from a memory buffer. The user needs a way to set the folder path for the external data files.
### Description
In some cases users load the model from a memory buffer, but they cannot load the external data files into memory. They need a way to set the folder path for the external data files so that ORT can figure out the external data location.
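A rough sketch of the intended usage; note that the session config key name shown here is an assumption for illustration, not taken from this PR:
```python
import onnxruntime as ort

with open("model.onnx", "rb") as f:
    model_bytes = f.read()  # model graph loaded into memory by the application

so = ort.SessionOptions()
# Hypothetical key name: tells ORT which folder holds the model's external
# data files, since a model loaded from bytes has no file path to resolve against.
so.add_session_config_entry(
    "session.model_external_initializers_file_folder_path", "/path/to/model_dir")

sess = ort.InferenceSession(model_bytes, so, providers=["CPUExecutionProvider"])
```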
### Description
Convert the output_padding attribute from 1D to 2D for ConvTranspose.
### Motivation and Context
https://github.com/microsoft/onnxruntime/issues/23403
Remove the auto-generated cgmanifest.json, because we can now get the
same information from vcpkg.
Also, remove some outdated entries from the main cgmanifest.json file.
### Description
1. Enabled the VCPKG flag in the Windows CPU CI build pipelines.
2. Increased the minimum supported CMake version from 3.26 to 3.28. Because of
this, dropped support for the old way of finding Python via
"find_package(PythonLibs)"; therefore, build.py no longer sets the
"PYTHON_EXECUTABLE" CMake variable when doing the cmake configure.
3. Added "xnnpack-ep" as a feature for ORT's vcpkg config.
4. Added asset cache support for ORT's vcpkg build.
5. Added VCPKG triplet files for the Android build.
6. Set the VCPKG triplet to "universal2-osx" if CMAKE_OSX_ARCHITECTURES was
found in the CMake extra defines.
7. Removed a small piece of code in build.py that supported CUDA
versions < 11.8.
8. Fixed an issue where CMAKE_OSX_ARCHITECTURES sometimes got specified
twice when build.py invoked cmake.
9. Added more model tests to the Android build. After this change, we
test all ONNX versions instead of just the latest one.
10. Fixed issues related to build.py's "--build_nuget" parameter. Also,
enabled the flag in most Windows CPU CI build jobs.
11. Removed a restriction in build.py that disallowed cross-compiling the
Windows ARM64 nuget package on Windows x86.
### Motivation and Context
Adopt vcpkg.
Bumps [lintrunner](https://github.com/suo/lintrunner) from 0.12.5 to
0.12.7.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/suo/lintrunner/blob/main/CHANGELOG.md">lintrunner's
changelog</a>.</em></p>
<blockquote>
<h2>[0.12.7] - 2024-12-05</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Build x86_64 wheels for Windows (<a
href="a4d6b74693">a4d6b74</a>)</li>
<li>Fix <a href="https://doc.rust-lang.org/clippy/">Clippy</a>
violatoins (<a
href="05ff6431bb">05ff643</a>)</li>
<li>Fetch all commit history to fix MacOS builds (<a
href="3770be65ee">3770be6</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1b70da01a6"><code>1b70da0</code></a>
chore(release): prep for 0.12.7</li>
<li><a
href="3770be65ee"><code>3770be6</code></a>
[CI] Fetch full commit history (<a
href="https://redirect.github.com/suo/lintrunner/issues/81">#81</a>)</li>
<li><a
href="b2482aff48"><code>b2482af</code></a>
[CI] Use <code>actions/checkout@v4</code> (<a
href="https://redirect.github.com/suo/lintrunner/issues/80">#80</a>)</li>
<li><a
href="05ff6431bb"><code>05ff643</code></a>
Fix clippy violations (<a
href="https://redirect.github.com/suo/lintrunner/issues/79">#79</a>)</li>
<li><a
href="1be20c6b8f"><code>1be20c6</code></a>
chore(release): prep for 0.12.6</li>
<li><a
href="a4d6b74693"><code>a4d6b74</code></a>
fix(build): build x86_64 wheels for Windows (<a
href="https://redirect.github.com/suo/lintrunner/issues/73">#73</a>)</li>
<li>See full diff in <a
href="https://github.com/suo/lintrunner/compare/v0.12.5...v0.12.7">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
### Description
The Node JS Samples included in the repository have outdated package
references that are broken, which are fixed in this PR.
### Motivation and Context
The samples included in this repository should just work, but sadly do
not. The reason is that they are using very outdated references for the
npm modules. This fix updates the dependencies to the current
onnxruntime-node, which fixes the samples. Also adds a small update to
the .gitignore to exclude the node_modules directories in the samples
directory, which keeps the local repo changelist cleaner.
### Description
Remove MSVC warnings 4244, 4267 from the list of disabled warnings in
cmake.
Fix the code that generates the warnings so that it no longer does.
### Motivation and Context
This makes onnxruntime_providers_openvino.dll pass BinSkim analysis.
Without this change BinSkim complains about the disabled warnings.
### Description
Follow-up for #23383 and #23474
* Adds android BrowserStack test back in
* Modifies MAUI csproj file to build into an APK
### Motivation and Context
There were two issues with the previous PRs:
1. The updated MAUI .csproj file configuration failed when building for
iOS and MacCatalyst. This caused problems in the packaging pipeline
because we build all C# projects in the .sln file in the packaging
pipeline. Removed the Mac & iOS build targets for now.
2. The previous MAUI .csproj file configuration did not build into an
APK. It was missing the `<OutputType>` tag and the Android package
type tag.
### Description
(1) Remove `if (CMAKE_CUDA_COMPILER_VERSION VERSION_GREATER_EQUAL 11)`
since the build requires CUDA >= 11.4.
(2) Add sm_86 and sm_89, since we generate SASS code only for the specified
CUDA architectures. This change supports popular consumer GPUs
(like RTX 30x0 and RTX 40x0).
(3) Add sm_120 to support Blackwell GPUs (like RTX 50x0 etc.).
(4) Add `-Xfatbin=-compress-all` to reduce wheel size. When
CMAKE_CUDA_ARCHITECTURES is not specified, the Linux wheel built with
CUDA 12.8 is reduced by 8% (from 324 MB to 299 MB).
### Motivation and Context
Support popular consumer GPUs (RTX 30x0, 40x0, 50x0) in the default
setting, and reduce binary size.
Note that the default SM settings do not impact the official release
binaries. The official release binaries are built with an explicit setting like
CMAKE_CUDA_ARCHITECTURES=75;80;90, which includes both SASS (real) and PTX
(virtual) architectures by default. See
https://cmake.org/cmake/help/latest/prop_tgt/CUDA_ARCHITECTURES.html for
more info.
### Description
Makes the QNN provider option `offload_graph_io_quantization` enabled by
default. It was previously disabled by default.
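A sketch of opting back out of the new default from Python (the option name is from this change; passing QNN provider options as a string dict is standard, but the exact accepted value strings are an assumption):
```python
import onnxruntime as ort

sess = ort.InferenceSession(
    "model_qdq.onnx",
    providers=[
        # "0" is assumed to restore the previous (disabled) behavior.
        ("QNNExecutionProvider", {"offload_graph_io_quantization": "0"}),
        "CPUExecutionProvider",
    ],
)
```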
### Motivation and Context
Enabling this option significantly decreases inference latency for many
models.
### Description
* Update rocm to 6.3.2;
* Remove dependency on cupy (which does not support rocm 6.3 yet).
### Motivation and Context
…-andriod-e2e-test-job.yml
### Description
* Update the environment to CUDA 12.6 / Ubuntu 22.04 (Ubuntu 20.04 uses
outdated Python 3.8 by default).
* Clean up the old TensorRT 8.6 test config.
### Motivation and Context
### Description
Attempt to make it more consistent.
### Motivation and Context
A customer reported a big difference in the performance of Round between
Windows and Linux.
### Description
- Add a symbolic shape inference dispatcher for `ReduceMean`.
- ReduceMean is used in RMSNorm, so shape inference fails for llama, phi,
and other torch-exported models.
- Reuse the dispatcher for ReduceSum, since ReduceMean 18+ and ReduceSum
13+ have the same spec other than the type of reduction performed.
- Fix an issue with the `quant_pre_process` tool where the external data
file is missing if `skip_symbolic_shape=True` and
`skip_optimization=False`.
- Add `"session.optimized_model_external_initializers_file_name"` to the
session options so that the external data gets saved in the same temp
directory as the optimized model (see the sketch below).
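A sketch of the session-option part (file names are illustrative; the config key is the one named above):
```python
import onnxruntime as ort

so = ort.SessionOptions()
# Save the optimized model and its external initializers side by side in the
# same directory, so the external data file is not left behind.
so.optimized_model_filepath = "/tmp/opt/model_optimized.onnx"
so.add_session_config_entry(
    "session.optimized_model_external_initializers_file_name",
    "model_optimized.onnx.data")

sess = ort.InferenceSession("model.onnx", so, providers=["CPUExecutionProvider"])
```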
### Motivation and Context
### Description
This PR removes the upper limit on the range of ONNX versions supported by
CANN graph inference (the previous version supported versions 8 through 15),
because the CANN version has been further upgraded to satisfy some developers'
requirements for higher ONNX versions.
### Motivation and Context
cpuinfo outputs errors when the CPU is not recognized.
This has been a longstanding issue, e.g.
https://github.com/microsoft/onnxruntime/issues/21947 and
https://github.com/microsoft/onnxruntime/issues/21393.
The issue has been exacerbated by
https://github.com/microsoft/onnxruntime/pull/22856:
this change
4fa0f1e0ed/onnxruntime/core/mlas/lib/qnbitgemm_kernel_neon.cpp (L189)
causes the messages to appear during static initialization.
This means that for Python, you see the errors as soon as you import onnxruntime.
```
>>> import onnxruntime
Error in cpuinfo: Unknown chip model name 'snapdragon (tm) 8cx gen 3 @ 3.40 GHz'.
Please add new Windows on Arm SoC/chip support to arm/windows/init.c!
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
unknown Qualcomm CPU part 0x1 ignored
```
The fix is to patch pytorch_cpuinfo to comment out the std::cerr lines in
cpuid_uarch.cc.
The errors are not actionable by the user, so they should not be emitted.
Tested that after these changes the errors no longer show up.
In this change:
1. Vectorization of k is updated to 4.
2. Tile_A and Tile_B are stored transposed in shared memory, which improves
memory locality for our access pattern.
3. Lane output is switched to individual vectors and its loop is unrolled;
this solves the problem where lane_output was not kept in registers before.
Perf improvements are not very consistent with this change. On a Tiger Lake
GPU with 32.0.101.6460 (latest Intel drivers):
```
Baseline
model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 7.36557e+06 <<<<
avg (tokens/s): 135.903
p50 (us): 7.35498e+06
stddev (us): 27599
n: 5 * 1001 token(s)
With Change
model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 6.52302e+06 <<<<
avg (tokens/s): 153.457
p50 (us): 6.52224e+06
stddev (us): 10407.3
n: 5 * 1001 token(s)
```
However, comparing before and after profiles in Intel GPA, one can clearly see
straight runs of ALU work that are no longer interspersed with write-backs to
the local memory that previously held lane_output.

There is a crash in the WebGPU CI pipeline. It happens at process
shutdown, when onnxruntime_pybind11_state.pyd is unloaded.
Here is the call stack:
```
dxil.dll!DxcSwapThreadMalloc() Unknown
dxil.dll!DxcThreadMalloc::DxcThreadMalloc(struct IMalloc *) Unknown
dxil.dll!DxcValidator::Release(void) Unknown
[Inline Frame] webgpu_dawn.dll!Microsoft::WRL::ComPtr<IDxcValidator>::InternalRelease() Line 235 C++
[Inline Frame] webgpu_dawn.dll!Microsoft::WRL::ComPtr<IDxcValidator>::{dtor}() Line 290 C++
webgpu_dawn.dll!dawn::native::d3d12::Backend::`scalar deleting destructor'(unsigned int) C++
webgpu_dawn.dll!`eh vector destructor iterator'(void * ptr, unsigned __int64 size, unsigned __int64 count, void(*)(void *) destructor) C++
webgpu_dawn.dll!dawn::native::InstanceBase::~InstanceBase() Line 197 C++
webgpu_dawn.dll!dawn::native::InstanceBase::`scalar deleting destructor'(unsigned int) C++
webgpu_dawn.dll!dawn::native::InstanceBase::DeleteThis() Line 218 C++
ucrtbase.dll!<lambda>(void)() Unknown
ucrtbase.dll!__crt_seh_guarded_call<int>::operator()<<lambda_7777bce6b2f8c936911f934f8298dc43>,<lambda>(void) &,<lambda_3883c3dff614d5e0c5f61bb1ac94921c>>() Unknown
ucrtbase.dll!_execute_onexit_table() Unknown
onnxruntime_pybind11_state.pyd!dllmain_crt_process_detach(const bool is_terminating) Line 182 C++
> onnxruntime_pybind11_state.pyd!dllmain_dispatch(HINSTANCE__ * const instance, const unsigned long reason, void * const reserved) Line 293 C++
ntdll.dll!LdrpCallInitRoutine() Unknown
ntdll.dll!LdrShutdownProcess() Unknown
ntdll.dll!RtlExitUserProcess() Unknown
kernel32.dll!ExitProcessImplementation() Unknown
ucrtbase.dll!exit_or_terminate_process() Unknown
ucrtbase.dll!common_exit() Unknown
python312.dll!00007ff9cab3ec8d() Unknown
python312.dll!00007ff9cab3efbf() Unknown
python312.dll!00007ff9cab3edee() Unknown
python312.dll!00007ff9cab57f4c() Unknown
python312.dll!00007ff9cab57579() Unknown
python312.dll!00007ff9cab573be() Unknown
python312.dll!00007ff9cab5729b() Unknown
python312.dll!00007ff9cabacfcb() Unknown
python312.dll!00007ff9cabacd7d() Unknown
python312.dll!00007ff9cab99e2d() Unknown
python.exe!00007ff78a641230() Unknown
kernel32.dll!BaseThreadInitThunk() Unknown
ntdll.dll!RtlUserThreadStart() Unknown
```
It might be because the destruction order of some global variables is
wrong. I saw the DX DLLs getting destroyed earlier than the WebGPU
instance in our code in onnxruntime_pybind11_state.pyd.
### Description
(1) Update the BiasGelu fusion to support ONNX Gelu-20.
Since ONNX Gelu-20 supports float/double/bf16/fp16, we update the
related ops to support these data types in the CUDA and ROCm execution
providers:
(2) Add double support for the Gelu/FastGelu ops in the CUDA/ROCm execution
providers.
(3) Add BFloat16 support for the Gelu ops in the CUDA execution provider.
(4) Add unit tests.
(5) Update the operator documents.
### Motivation and Context
https://github.com/microsoft/onnxruntime/issues/23491
### Description
Add details about how to access the BrowserStack logs
### Motivation and Context
- The BrowserStack link on its own is confusing to people who don't have
context.
Let me know if you have suggestions to make the text clearer or more
informative.
The NDK has two toolchain CMake files, as you can see in
https://android.googlesource.com/platform/ndk/+/refs/heads/main/build/cmake
By default the NDK uses the legacy one to provide the best compatibility.
We don't need that, so this PR switches to the new one.
The new toolchain CMake file uses standard CMake variables like
CMAKE_ANDROID_RTTI to control C++ features.
### Description
This PR enables the Python dlpack interface by default.
### Motivation and Context
The dlpack Python interface is useful in inference mode, not only training
mode, since some inference result pre-processing may be written in torch, and
unnecessary device transfers should be reduced in those cases.
Closes https://github.com/microsoft/onnxruntime/issues/15963 and closes
https://github.com/microsoft/onnxruntime/issues/22061.
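A rough usage sketch; the to_dlpack/from_dlpack surface on the pybind OrtValue is an assumption here (mirroring what the training build exposes), so treat the exact call sites as illustrative:
```python
import numpy as np
import onnxruntime as ort
import torch

ov = ort.OrtValue.ortvalue_from_numpy(np.random.rand(2, 3).astype(np.float32))
# Assumed API: hand the underlying buffer to torch via DLPack without a copy.
capsule = ov._ortvalue.to_dlpack()
t = torch.utils.dlpack.from_dlpack(capsule)
```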
TODOs:
- [x] Add tests like
5407c69028/orttraining/orttraining/test/python/orttraining_test_ortvalue.py
that are unrelated to the training feature.
---------
Co-authored-by: Xavier Dupré <xadupre@users.noreply.github.com>
Co-authored-by: Justin Chu <justinchuby@users.noreply.github.com>
Add overload of `TryParseStringWithClassicLocale()` that uses `std::from_chars()` for certain types.
Reduce binary size. It recently increased after PR #23526.
Fix the issue that the newly generated EP context model is not able to find external data.
### Description
The newly generated EP context model was not able to find the external data file because it lost track of the source model path, which is used to locate the external initializers.
Related issue: https://github.com/microsoft/onnxruntime/issues/23358
### Description
After some investigation and debugging, I decided to follow the recommended
workaround as suggested in https://github.com/vitejs/vite/issues/8427.
### Motivation and Context
There is a known issue with Vite 5.x when using the WebAssembly package.
Detailed information is in https://github.com/vitejs/vite/issues/8427.
There were previous attempts to fix this problem (#23487). I tried
various ways to make it work out of the box for Vite users, but none
of them worked: some "fixes" did fix usage with Vite but broke other
use cases/bundlers, and some introduced other issues. Eventually I figured
out that there is no good way to fix this inside ONNX Runtime.
Considering that the root cause is inside Vite and may be fixed in Vite
v6, I think the best way for now is to follow the recommended workaround.
Fix tensor external data info length parsing issue.
The old implementation was parsing a `size_t` value with `strtol` (via `OrtStrToPtrDiff`) on ARM64 MSVC.
bf023ab3d5/onnxruntime/core/platform/path_lib.h (L74)
If we have `sizeof(size_t) == 8` and `sizeof(long) == 4` (as is the case for x64 and ARM64 MSVC), `strtol` will return a maximum value of `2^31-1` even for a larger, valid `size_t` value. `strtol` will also set `errno` to `ERANGE`, but we weren't checking that.
Updated to use `ParseStringWithClassicLocale` which will parse directly to the target type.
Added some tests.
Remove the inline default transposeHelper and ensure we use the proper check
via CanUse_hipBlasTransposeHelper_MLFloat16.
Related to a change in the ROCm onnxruntime repo:
https://github.com/ROCm/onnxruntime/pull/82
### Description
Required to correctly limit the grid size of the transpose helper kernel.
### Motivation and Context
Compilation was defaulting to the inline constructor that was removed
instead of using the overloaded case with proper checks.
Removed the inline default "true" case, as this is incorrect for newer
AMD cards/targets.
Co-authored-by: Ted Themistokleous <tedthemistokleous@amd.com>