Microsoft.ML.OnnxRuntime is not built with the Release configuration but
with RelWithDebInfo, which is not recognized by the MSBuild SDK. Consequently,
optimizations are not enabled. A fix would be to simply force the
configuration to Release when building the .NET code, even if
RelWithDebInfo was set in the command-line arguments, but I could not find
an easy way to do that. Instead, I mimic the behavior of the
Release configuration by setting the `Optimize` property.
I can see a 15% performance improvement using this simple model, which sums
its 3 inputs:
```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Running;
using Microsoft.ML.OnnxRuntime;
var config = DefaultConfig.Instance; //.WithOptions(ConfigOptions.DisableOptimizationsValidator);
BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args, config);
public class OnnxBench
{
private const int Iterations = 100_000;
private const int BatchSize = 50;
private InferenceSession _session = default!;
private string[] _inputNames = default!;
private OrtValue[] _inputValues = default!;
private RunOptions _runOptions = default!;
[GlobalSetup]
public void GlobalSetup()
{
using SessionOptions sessionOptions = new();
sessionOptions.InterOpNumThreads = 1;
sessionOptions.IntraOpNumThreads = 1;
sessionOptions.GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_ALL;
sessionOptions.ExecutionMode = ExecutionMode.ORT_SEQUENTIAL;
_session = new InferenceSession(
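// Minimal model decoded below: D = A + B, then X = C + D (i.e., X = A + B + C).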
Convert.FromBase64String("CAo6cAoOCgFBCgFCEgFEIgNBZGQKDgoBQwoBRBIBWCIDQWRkEgJscloRCgFBEgwKCggBEgYKAAoCCAFaEQoBQhIMCgoIARIGCgAKAggBWhEKAUMSDAoKCAESBgoACgIIAWIRCgFYEgwKCggBEgYKAAoCCAFCBAoAEBU="),
sessionOptions);
_inputNames = ["A", "B", "C"];
_inputValues =
[
OrtValue.CreateTensorValueFromMemory(new float[BatchSize], [BatchSize, 1]),
OrtValue.CreateTensorValueFromMemory(new float[BatchSize], [BatchSize, 1]),
OrtValue.CreateTensorValueFromMemory(new float[BatchSize], [BatchSize, 1]),
];
_runOptions = new RunOptions();
}
[Benchmark(OperationsPerInvoke = Iterations)]
public float Run()
{
var inputValues0Span = _inputValues[0].GetTensorMutableDataAsSpan<float>();
var inputValues1Span = _inputValues[1].GetTensorMutableDataAsSpan<float>();
var inputValues2Span = _inputValues[2].GetTensorMutableDataAsSpan<float>();
for (int i = 0; i < BatchSize; i += 1)
{
inputValues0Span[i] = Random.Shared.NextSingle();
inputValues1Span[i] = Random.Shared.NextSingle();
inputValues2Span[i] = Random.Shared.NextSingle();
}
float sum = 0f;
for (int i = 0; i < Iterations; i += 1)
{
using var output = _session.Run(_runOptions, _inputNames, _inputValues, _session.OutputNames);
ReadOnlySpan<float> outputData = output[0].GetTensorDataAsSpan<float>();
for (int j = 0; j < outputData.Length; j += 1)
{
sum += outputData[j];
}
}
return sum;
}
}
```
| Method | Mean | Error | StdDev |
|------- |---------:|----------:|----------:|
| Before | 5.003 us | 0.0318 us | 0.0297 us |
| After | 4.325 us | 0.0568 us | 0.0503 us |
Fixes #16203
Prior to this PR, when `ceil_mode` is on, the average calculation would
divide by the kernel size even when the number of remaining pixels in the
window was less than the kernel size, which caused this operator's output
to differ between ORT and torch.
However, this fix only applies to the change in #15597, which only
supports AvgPool since opset 19. Older opset versions remain unchanged,
as they use the MLAS implementation.
The PR also fixes the shape mismatch caused by the sliding window starting
from padding. More detail: https://github.com/onnx/onnx/pull/6650 (this
PR is also validated with the tests added in
https://github.com/onnx/onnx/pull/6650).
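As a 1-D illustration of the corrected averaging (a sketch, not the actual ORT kernel): with `ceil_mode` on, the last window can extend past the input, and the sum should be divided by the number of elements actually inside the input rather than by the full kernel size.
```c++
#include <algorithm>
#include <cmath>
#include <vector>

// 1-D average pool with ceil_mode semantics (no padding), illustrating the fix.
std::vector<float> AvgPool1DCeil(const std::vector<float>& x, int kernel, int stride) {
  const int n = static_cast<int>(x.size());
  // ceil_mode output length: ceil((n - kernel) / stride) + 1
  const int out_len = static_cast<int>(std::ceil((n - kernel) / static_cast<float>(stride))) + 1;
  std::vector<float> y(out_len);
  for (int o = 0; o < out_len; ++o) {
    const int start = o * stride;
    const int end = std::min(start + kernel, n);  // clamp the window to the input
    float sum = 0.0f;
    for (int i = start; i < end; ++i) sum += x[i];
    // The bug divided by `kernel` here even when end - start < kernel.
    y[o] = sum / static_cast<float>(end - start);
  }
  return y;
}
```
For example, with `n = 5`, `kernel = 2`, `stride = 2`, the third window covers only one element, so its average divides by 1, matching torch.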
### Description
Adds `from __future__ import annotations` to a Python script to support
newer annotation syntax on Python 3.8.
### Motivation and Context
The pipeline that runs this script uses Ubuntu 20.04's default Python
version (3.8), which does not support the newer annotation syntax unless
it is imported from `__future__`.
### Description
Fixes QNN EP builds that fail due to a missing function in the provider
bridge API: `logging::LoggingManager::HasDefaultLogger()`.
### Motivation and Context
A [recent PR](https://github.com/microsoft/onnxruntime/pull/23120) made
QNN EP a shared library. A [different
PR](https://github.com/microsoft/onnxruntime/pull/23435) added a use of a
new function in QNN EP that was not part of the provider bridge API. The
CI did not catch this because main was not merged into the first PR before
it was merged.
### Description
- Makes QNN EP a shared library **by default** when building with
`--use_qnn` or `--use_qnn shared_lib`. Generates the following build
artifacts:
- **Windows**: `onnxruntime_providers_qnn.dll` and
`onnxruntime_providers_shared.dll`
- **Linux**: `libonnxruntime_providers_qnn.so` and
`libonnxruntime_providers_shared.so`
- **Android**: Not supported. Must build QNN EP as a static library.
- Allows QNN EP to still be built as a static library with `--use_qnn
static_lib`. This is primarily for the Android QNN AAR package.
- Unit tests run for both the static and shared QNN EP builds.
### Detailed changes
- Updates Java bindings to support both shared and static QNN EP builds.
- Provider bridge API:
- Adds logging sink ETW to the provider bridge. Allows EPs to register
ETW callbacks for ORT logging.
- Adds a variety of methods for onnxruntime objects that are needed by
QNN EP.
- QNN EP:
- Adds `ort_api.h` and `ort_api.cc`, which encapsulate the API provided
by ORT in a manner that allows the EP to be built as either a shared or
static library.
- Adds custom function to transpose weights for Conv and Gemm (instead
of adding util to provider bridge API).
- Adds custom function to quantize data for LeakyRelu (instead of adding
util to provider bridge API).
- Adds custom ETW tracing for QNN profiling events:
- shared library: defines its own TraceLogging provider handle
- static library: uses ORT's TraceLogging provider handle and existing
telemetry provider.
- ORT-QNN Packages:
- **Python**: Pipelines build QNN EP as a shared library by default.
User can build a local python wheel with QNN EP as a static library by
passing `--use_qnn static_lib`.
- **NuGet**: Pipelines build QNN EP as a shared library by default.
`build.py` currently enforces that QNN EP is built as a shared library.
Support for building a QNN NuGet package with a static QNN EP can be
added later if deemed necessary.
- **Android**: Pipelines build QNN EP as a **static library**.
`build.py` enforces QNN EP to be built as a static library. Packaging
multiple shared libraries into an Android AAR package is not currently
supported due to the added need to also distribute a shared libcpp.so
library.
### Description
Add custom vcpkg ports for the following packages:
1. cpuinfo
2. onnx
3. pthreadpool
4. xnnpack
Because:
- The cpuinfo/pthreadpool/xnnpack packages in the official vcpkg repo
are too old.
  - XNNPACK's version is updated from 2022-12-22 to 2025-01-17.
  - cpuinfo's version is updated from 2022-07-19 to 2024-12-09.
  - pthreadpool's version is updated from 2020-04-10 to 2024-12-17, and
the source code location has changed from
https://github.com/Maratyszcza/pthreadpool to
https://github.com/google/pthreadpool.
- The onnx package in the official repo requires building Python from
source, which in turn requires a lot of additional dependencies to be
installed. This PR removes that requirement.
- Added a disable_gcc_warning.patch file for xnnpack to address the
issue reported in https://github.com/google/XNNPACK/issues/7650. I will
remove this patch when the issue is fully addressed.
- Added `-DONNX_DISABLE_STATIC_REGISTRATION=ON` to ONNX's config
options.
### Description
This PR updates the triplet files that manage the compile flags for
vcpkg packages.
All the changes are autogenerated except for the gen.py file in this PR.
Main changes:
1. Enable debug info for all Linux build configs (Release and Debug).
2. Set CMAKE_CXX_STANDARD in each triplet. The value is set to 20 for
macOS targets and 17 for the others.
3. Only set _FORTIFY_SOURCE in release builds. This addresses a build
issue on some platforms caused by the following glibc change:
"Warn if user requests __FORTIFY_SOURCE but it is disabled"
https://sourceware.org/git/?p=glibc.git;a=commit;f=include/features.h;h=05c2c9618f583ea4acd69b3fe5ae2a2922dd2ddc
### Motivation and Context
Address a Linux build error.
### Description
Add test project that will perform an automated UI test that runs the
unit tests on Android.
### Motivation
- Enables end-to-end on-device MAUI unit testing which we want to add to
the packaging pipelines
### Context
Microsoft.ML.OnnxRuntime.Tests.MAUI uses DeviceRunners.VisualRunners to
allow running the unit tests (found in
Microsoft.ML.OnnxRuntime.Tests.Common) across multiple devices.
DeviceRunners.VisualRunners provides a simple UI with a button that will
run the unit tests and a panel with the unit test results.
In order to automate the process of running the unit tests across mobile
devices, Appium is used for UI testing orchestration (it provides a way
to interact with the UI), and BrowserStack automatically runs these
Appium tests across different mobile devices.
This project does not include the capability to start an Appium server
locally or attach to a local emulator or device.
## Build & run instructions
### Requirements
* A BrowserStack account with access to App Automate
* You can set BrowserStack credentials as environment variables as shown
[here](https://www.browserstack.com/docs/app-automate/appium/getting-started/c-sharp/nunit/integrate-your-tests#CLI)
* ONNXRuntime NuGet package
1. You can either download the [stable NuGet
package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) then
follow the instructions from [NativeLibraryInclude.props
file](../Microsoft.ML.OnnxRuntime.Tests.Common/NativeLibraryInclude.props)
to use the downloaded .nupkg file
2. Or follow the [build
instructions](https://onnxruntime.ai/docs/build/android.html) to build
the Android package locally
* The dotnet workloads for maui and maui-android, which do not always
install correctly automatically:
1. `dotnet workload install maui`
2. `dotnet workload install maui-android`
* [Appium](https://appium.io/docs/en/latest/quickstart/) and the
[UiAutomator2
driver](https://appium.io/docs/en/latest/quickstart/uiauto2-driver/)
### Run instructions
1. Build the Microsoft.ML.OnnxRuntime.Tests.MAUI project into a signed
APK.
1. Run the following: `dotnet publish -c Release -f net8.0-android` in
the Microsoft.ML.OnnxRuntime.Tests.MAUI directory.
2. Search for the APK files generated. They should be located in
`bin\Release\net8.0-android\publish`.
3. If they're in a different location, edit the `browserstack.yml` file
to target the path to the signed APK.
2. Ensure you've set the BrowserStack credentials as environment
variables.
3. Run the following in the
Microsoft.ML.OnnxRuntime.Tests.Android.BrowserStack directory: `dotnet
test`
4. Navigate to the [BrowserStack App Automate
dashboard](https://app-automate.browserstack.com/dashboard/v2/builds) to
see your test running!
BUG #23273
This PR makes the following optimizations:
1. When the number of output channels is one: 1) calculate the offset
before the input-channel loop to reduce index-to-offset calculations, and
2) split `inputChannelsPerGroup` into `inputChannelsPerGroupInt` and
`inputChannelsRemainder` parts so that we can always access 4 values at a
time in the `inputChannelsPerGroupInt` part (see the sketch below).
2. Use a precise initial value to reduce useless loop iterations. Thanks
to @jiangzhaoming for the suggestion.
With this PR, ConvTranspose goes from 8.4 s to 3.7 s on Intel Meteor Lake,
and from 2.7 s to 1.6 s on an NVIDIA RTX 2000 Ada.
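A scalar sketch of the loop split in item 1 (the actual change is in the WGSL shader; names here are illustrative):
```c++
#include <cstddef>

// Split the per-group input-channel loop into a multiple-of-4 main part
// (vec4 loads in the actual shader) and a scalar remainder, so the main
// loop can always read 4 values at a time.
float SumGroup(const float* input, size_t input_channels_per_group) {
  const size_t input_channels_per_group_int = input_channels_per_group / 4 * 4;
  float sum = 0.0f;
  for (size_t c = 0; c < input_channels_per_group_int; c += 4) {
    sum += input[c] + input[c + 1] + input[c + 2] + input[c + 3];
  }
  for (size_t c = input_channels_per_group_int; c < input_channels_per_group; ++c) {
    sum += input[c];  // the inputChannelsRemainder part
  }
  return sum;
}
```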
### Description
Use onnx_protobuf.h to suppress some GCC warnings.
All the changes are autogenerated by a shell command.
```bash
find . -type f -exec sed -i 's/#include\s\+<onnx\/onnx_pb.h>/#include "core\/graph\/onnx_protobuf.h"/g' {} \;
```
### Motivation and Context
This PR is needed to make vcpkg work (without disabling all warnings).
This PR was split from a bigger PR at a reviewer's request.
### Description
Suppress some strict-aliasing related warnings in WebGPU EP
For example:
```
/home/chasun/src/onnxruntime/onnxruntime/core/providers/webgpu/math/unary_elementwise_ops.cc:208:30: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
208 | float encoded_value = *reinterpret_cast<const float*>(attr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
This PR does not really fix the problems; it just suppresses the
warnings to make the build pass. Some strict-aliasing issues could be
fixed by using std::bit_cast, which, however, requires C++20.
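A well-defined alternative for sites like the one above, sketched under the assumption that the attribute bytes hold a float (the PR itself only suppresses the warning):
```c++
#include <cstring>

// Copy the bytes instead of type-punning through a pointer; std::bit_cast
// (C++20) would express the same thing more directly.
inline float DecodeFloatAttr(const void* attr) {
  float value;
  std::memcpy(&value, attr, sizeof(value));
  return value;
}
```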
### Motivation and Context
Building the code on Azure Linux 3 fails. To reproduce the issue, you may
get an Azure Linux 3 machine and run:
```
python3 tools/ci_build/build.py --update --build --build_wheel --use_xnnpack --build_nodejs --use_webgpu --build_dir b --skip_submodule_sync --parallel --use_binskim_compliant_compile_flags --build_shared_lib --config Release
```
The WebNN CPU device type may now target different backends, such as
CoreML. Legacy workarounds that were special-cased for the TFLite backend
should be removed and allowed to fail as-is, as these are implementation
issues. Additionally, the WebNN EP should adhere to WebNN API conformance:
we assume all WebNN ops should be supported, so the WebNN op support
status for different device types is removed from webnn-operators.md as well.
### Description
Re-implementation of https://github.com/microsoft/onnxruntime/pull/23320
(which was reverted).
- Cleans up QNN logging resources if an error occurs during
initialization.
- Updates `QnnLogging()`, which is a logging callback called by QNN
libs, to handle situations in which ORT logging is unavailable, thus
avoiding a segmentation fault.
- Updates `QnnBackendManager::CreateHtpPowerCfgId()` and
`QnnBackendManager::SetHtpPowerConfig()` to check that backend setup is
complete. These functions get called in QNN EP's `OnRunStart()` even if
QNN backend setup failed and the model is assigned to a different EP.
This prevents a segmentation fault. Our Android tests ran into this
issue because the QNN backend setup failed, the model was then assigned
to CPU EP, and the QNN EP's `OnRunStart()` was still called with an
invalid backend.
### Motivation and Context
If QNN initialization fails at any point, we have to properly clean up
the logging resources so that QNN does not call our `QnnLogging()`
callback after the EP has been destroyed.
Bumps [clang-format](https://github.com/ssciwr/clang-format-wheel) from
19.1.6 to 19.1.7.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f865928dd2"><code>f865928</code></a>
Bump to v19.1.7</li>
<li>See full diff in <a
href="https://github.com/ssciwr/clang-format-wheel/compare/v19.1.6...v19.1.7">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
### Description
Moving Android E2E test steps from macOS 13 to Ubuntu 22.04.
### Motivation and Context
Reduces the dependency on macOS, whose x64 version is being deprecated.
* [CPU EP] Implement Add/Sub/Mul/Div element wise operations for
(u)int8, (u)int16, uint32 and uint64.
* [CPU EP] Implement Neg unary operation for int16
* [CUDA EP] Implement Add/Sub/Mul/Div element wise operations for
(u)int8 and (u)int16
### Motivation and Context
This solves https://github.com/microsoft/onnxruntime/issues/23051
### Description
- Fix a type cast in
https://github.com/microsoft/onnxruntime/pull/23363.
- Include some headers suggested by code scanning in that PR.
### Motivation and Context
The post-merge build has an error:
```
onnxruntime\core\framework\print_tensor_statistics_utils.h(92,55): error C2220: the following warning is treated as an error [D:\a\_work\1\b\Debug\onnxruntime_framework.vcxproj]
```
### Description
When an ONNX model reuses an initializer in more than one op, if one of
the ops wants to add this initializer to the skipped list while another
op still needs it, the process crashes. Therefore, like other EPs, we
count `initializer_usage_`, the number of occurrences of each initializer
across all ops, and modify `AddInitializersToSkip` to decrement the
corresponding initializer's count when adding the specific operators.
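A minimal sketch of the counting scheme, with hypothetical names and simplified types (the real EP code walks the ORT graph):
```c++
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

class InitializerTracker {
 public:
  // Called once per op: every initializer input increments its usage count.
  void CountUse(const std::vector<std::string>& op_initializer_inputs) {
    for (const auto& name : op_initializer_inputs) ++initializer_usage_[name];
  }
  // Called when an op adds the initializer to the skipped list: only
  // actually skip once no other op still needs it.
  void AddInitializerToSkip(const std::string& name) {
    if (--initializer_usage_[name] <= 0) skipped_initializers_.insert(name);
  }
 private:
  std::unordered_map<std::string, int> initializer_usage_;
  std::unordered_set<std::string> skipped_initializers_;
};
```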
1. Update the onnxruntime binary size checks CI pipeline's docker image.
Use a different docker image that is not manylinux-based; the new one is
smaller.
2. Add flatbuffers to tools/ci_build/requirements/pybind/requirements.txt.
3. Delete
tools/ci_build/github/azure-pipelines/py-package-build-pipeline.yml. The
pipeline was for generating packages for Olive, but it went unused, and
its content was highly duplicated with our official Python packaging
pipeline.
4. A lot of YAML files reference the pypa/manylinux git repo but do not
use it. This PR removes the references.
### Description
This reverts commit 5d215ff810.
### Motivation and Context
The reverted change causes a packaging pipeline to fail due to a crash
in one of the E2E Android tests.
Reverting this first to fix the pipeline. We should come up with an
alternative way to properly do the necessary clean up.
### Description
The `std::unordered_map` uses a `std::string_view` as its key, but the
string view may refer to invalid memory: the `IdentityBuilder` function
returns a `std::string` that goes out of scope quickly.
```c++
unordered_map<string_view, std::vector<NodeIndex>> identical_children_map;
for (auto i = node->OutputEdgesBegin(); i != node->OutputEdgesEnd(); ++i) {
if (i->GetNode().OpType() == op) {
identical_children_map[IdentityBuilder(graph, i->GetNode())].push_back(i->GetNode().Index());
}
}
```
This code causes a warning (treated as an error) with EMSDK v4.0.1:
```
C:/code/o2/onnxruntime/core/optimizer/identical_children_consolidation.cc:51:30: error: object whose reference is captured by 'identical_children_map' will be destroyed at the end of the full-expression [-Werror,-Wdangling-capture]
51 | identical_children_map[IdentityBuilder(graph, i->GetNode())].push_back(i->GetNode().Index());
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
```
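One way to avoid the dangling reference, sketched below, is to let the map own its keys by using `std::string` (whether the actual fix does this or caches the strings elsewhere is an implementation choice):
```c++
// The map now owns the key storage, so the temporary returned by
// IdentityBuilder can safely be destroyed at the end of the expression.
std::unordered_map<std::string, std::vector<NodeIndex>> identical_children_map;
for (auto i = node->OutputEdgesBegin(); i != node->OutputEdgesEnd(); ++i) {
  if (i->GetNode().OpType() == op) {
    identical_children_map[IdentityBuilder(graph, i->GetNode())].push_back(i->GetNode().Index());
  }
}
```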
### Description
- Fixes a segfault when the function that cleans up HTP memory handles
uses an invalid Logger.
- Fixes a unit test that compares output from QNN EP against exact float
values. QNN HTP runs float32 models with float16 precision, so the
comparison needs a tolerance.
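For example, a gtest-style comparison with an absolute tolerance might look like this (a sketch; the tolerance value is illustrative, not the one the actual test uses):
```c++
#include <gtest/gtest.h>
#include <vector>

// Compare element-wise with a tolerance instead of exact equality, since
// HTP executes float32 models with float16 precision.
void ExpectTensorsClose(const std::vector<float>& expected,
                        const std::vector<float>& actual,
                        float abs_tolerance = 1e-2f) {
  ASSERT_EQ(expected.size(), actual.size());
  for (size_t i = 0; i < expected.size(); ++i) {
    EXPECT_NEAR(expected[i], actual[i], abs_tolerance);
  }
}
```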
### Motivation and Context
Fixes issues with using QNN HTP memory sharing on Windows ARM64. This is
also needed to test HTP shared memory with
https://github.com/microsoft/onnxruntime/pull/23120.
### Description
<!-- Describe your changes. -->
The old `GetCapability` function of the WebNN EP is just a very simple
search for groups of nodes that can be handled. This doesn't work well
in the following example graph, where A and D could be handled by the
EP but B sits between them in the topological order, so you get two
single-node capabilities. It may also be advantageous for C and E to be
handled by the EP, since they would be combined with D even though they
are not connected.
```
A B C
| / |
D E
| |
```
Therefore, we improve the partitioning results by reusing
`utils::CreateSupportedPartitions`, which walks the edges of each node
that the EP can handle as the nodes are iterated in topological order.
This guarantees that all connected nodes that can be handled are grouped
together. Correspondingly, we modify the `webnn::GetSupportedNodes`
function to return the supported nodes instead of the groups of supported
partitions.
Co-authored-by: Dwayne Robinson <fdwr@hotmail.com>
Add a tool to generate the node_block_list used in the [float16 conversion tool](https://github.com/microsoft/onnxruntime/blob/04030f64be/onnxruntime/python/tools/transformers/float16.py#L175).
Previously, we added a feature to dump statistics (like min and max) of
each node's inputs/outputs. However, it is time-consuming to use that to
produce a list of nodes that must be kept in float32 when the model is large.
This tool speeds up the process by outputting a list of nodes that have
potential overflow in float-to-half conversion.
Usage: build onnxruntime from source with `--cmake_extra_defines
onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS=1`, then set some environment
variables before running the float32-optimized ONNX model, like:
```
export ORT_DEBUG_NODE_IO_DUMP_HALF_CONVERSION_OVERFLOW=1
export ORT_DEBUG_NODE_IO_HALF_OVERFLOW_THRESHOLD=50000
python benchmark.py -e optimum --height 1024 --width 1024 --steps 3 -b 1 -v Flux.1D -p flux1_dev_onnx/fp32_opt --skip_warmup
```
The threshold `ORT_DEBUG_NODE_IO_HALF_OVERFLOW_THRESHOLD` shall be <=
65504 (the maximum finite value of float16). The default value is 50000
if the environment variable is not set. It is better to leave some margin
if the number of samples in the test is not large.
As a demo, we add an option `--skip_warmup` to benchmark.py for Flux, so
that we can reduce the time spent dumping warm-up runs.
Example snippet of stdout (each inference session prints such a summary
when the session ends):
```
Total counter in node dumping: 141
Found 2 nodes cannot be converted to half precision due to potential input/output overflow.
Operator frequencies for these nodes:
Softmax : 1
MatMul : 1
# -------
# Example python script for float16 conversion
# For details, search `node_block_list` in https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/float16.py
# -------
from onnxruntime.transformers.onnx_model import OnnxModel
m = OnnxModel(onnx.load('flux1_dev_onnx/fp32_opt/vae_decoder/model.onnx'))
node_block_list = [
'/decoder/mid_block/attentions.0/Softmax',
'/decoder/mid_block/attentions.0/MatMul',
]
m.convert_float_to_float16(keep_io_types=False, node_block_list=node_block_list)
m.save_model_to_file('fp16/optimized.onnx', use_external_data_format=False)
```
Then you can use the python script to convert corresponding model to
float16.
### Motivation and Context
It is a tool to generate the node_block_list used in float16 conversion
of Stable Diffusion 3.x and Flux models in
https://github.com/microsoft/onnxruntime/pull/22986.
In a Stable Diffusion or Flux pipeline, there are multiple models, and
there can be multiple session runs for each model. Without a proper
tool, it is time-consuming to get the node_block_list for each model.
### Description
Follow-up to #21897.
To be compatible with onnx 1.17.0, registering opset 22 is required for
the [operators updated to support
bfloat16](https://github.com/onnx/onnx/releases/tag/v1.17.0).
### Motivation and Context
Fixes #23162, #23161, and #23164 (XNNPACK)
### Remaining issue
#23163 (QNN) See [the
file](https://github.com/microsoft/onnxruntime/pull/23344/files#diff-04f5d6db0a6873f7299ed06ff1ec45a49e69f0865cb32f4397cd56db0cd0a784)
### Result of `find_optimizer_opset_version_updates_required.py` (CPU only)
```
[WARNING] - Newer opset found for kOnnxDomain.Conv. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/conv_add_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.IsInf. Latest:20 Optimizer support ends at 10. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/isinf_reducesum_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/isinf_reducesum_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/isinf_reducesum_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.HardSigmoid. Latest:22 Optimizer support ends at 6. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/conv_add_act_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/layer_norm_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Transpose. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.Conv. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.MaxPool. Latest:22 Optimizer support ends at 12. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.AveragePool. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.BatchNormalization. Latest:15 Optimizer support ends at 14. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.Transpose. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.Upsample. Latest:10 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.Resize. Latest:19 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.GlobalMaxPool. Latest:22 Optimizer support ends at 1. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.GlobalAveragePool. Latest:22 Optimizer support ends at 1. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/nchwc_transformer.cc
[WARNING] - Newer opset found for kOnnxDomain.Shape. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pre_shape_node_elimination.cc
[WARNING] - Newer opset found for kOnnxDomain.Conv. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/conv_bn_fusion.cc
[ERROR] - Call/Declaration is split over multiple lines. Please check manually.File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/label_encoder_fusion.cc Line:49
[ERROR] - Failed to find version information for "ai.onnx.ml".LabelEncoder. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/label_encoder_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.HardSigmoid. Latest:22 Optimizer support ends at 6. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/conv_activation_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Dropout. Latest:22 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/dropout_elimination.cc
[WARNING] - Newer opset found for kOnnxDomain.Transpose. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/gemm_transpose_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Transpose. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/gemm_transpose_fusion.cc
[ERROR] - Symbolic name of 'ignorable_nodes[index].first' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/matmul_bn_fusion.cc
[ERROR] - Symbolic name of 'dest.first' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/matmul_bn_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Conv. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pad_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.AveragePool. Latest:22 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pad_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.MaxPool. Latest:22 Optimizer support ends at 12. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pad_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Pad. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pad_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/pad_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Dropout. Latest:22 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/bias_dropout_fusion.cc
[ERROR] - Failed to find version information for kMSDomain.BitmaskDropout. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/bias_dropout_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Clip. Latest:13 Optimizer support ends at 6. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/relu_clip_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/fast_gelu_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Cast. Latest:21 Optimizer support ends at 19. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/fast_gelu_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Reshape. Latest:21 Optimizer support ends at 14. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/reshape_fusion.cc
[ERROR] - Failed to find version information for kMSDomain.ConcatTraining. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/reshape_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Where. Latest:16 Optimizer support ends at 9. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/not_where_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Where. Latest:16 Optimizer support ends at 9. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/not_where_fusion.cc
[WARNING] - Newer opset found for kOnnxDomain.Conv. Latest:22 Optimizer support ends at 11. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/conv_mul_fusion.cc
[ERROR] - Symbolic name of 'QOpName' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
[ERROR] - Symbolic name of 'QOpName' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
[ERROR] - Symbolic name of 'DQOpName' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
[ERROR] - Symbolic name of 'DQOpName' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/qdq_util.cc
[ERROR] - Call/Declaration is split over multiple lines. Please check manually.File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/avx2_weight_s8_to_u8.cc Line:170
[WARNING] - Newer opset found for kOnnxDomain.MaxPool. Latest:22 Optimizer support ends at 12. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/qdq_transformer/qdq_propagation.cc
[ERROR] - Symbolic name of 'current_node.OpType(' found for op. Please check manually. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/compute_optimizer/upstream_transformer_base.cc
[WARNING] - Newer opset found for kOnnxDomain.Reshape. Latest:21 Optimizer support ends at 14. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/compute_optimizer/upstream_reshape.cc
[WARNING] - Newer opset found for kOnnxDomain.Transpose. Latest:21 Optimizer support ends at 13. File:/home/titaiwang/onnxruntime/onnxruntime/core/optimizer/attention_fusion_helper.h
```
Use ruff as the code formatter in place of black and isort, since it is
much faster and projects like PyTorch and ONNX have adopted the ruff
format as well.
This PR includes only auto-fixed formatting changes.
### Description
This PR allows the WebGPU EP to be built with Emscripten for WebAssembly,
including:
- cmake build file updates to support the correct setup for Emscripten;
- code changes to fix build breaks for wasm;
- a change in the Web CI pipeline to add a build-only target for wasm
with `--use_webgpu`.
### Description
Docker's buildx has four different drivers:
1. default
2. docker-container
3. kubernetes
4. remote
We currently use "docker-container". This PR changes it to the default
driver, because the docker-container driver needs to fetch an image from
Docker Hub, which is no longer free and is rate limited.