Commit graph

12327 commits

Author SHA1 Message Date
Changming Sun
e3006b68b5 update 2025-02-07 05:51:24 +00:00
Changming Sun
2654706f52 Add missing file 2025-02-07 05:19:50 +00:00
Changming Sun
9634ab5f24 Merge remote-tracking branch 'upstream/main' into snnn/vcpkg2 2025-02-07 05:12:48 +00:00
Changming Sun
4b73593792 Merge remote-tracking branch 'upstream/snnn/vcpkg2' into snnn/vcpkg2 2025-02-07 05:12:27 +00:00
Changming Sun
1bf0bdc765 update 2025-02-07 05:12:03 +00:00
Changming Sun
7358422abf update 2025-02-06 20:37:03 -08:00
Changming Sun
d3c07d4035 update 2025-02-07 04:31:46 +00:00
Changming Sun
63f586697a update 2025-02-06 19:18:53 -08:00
Changming Sun
758342aaa3 update 2025-02-06 17:26:38 -08:00
microsoft-github-policy-service[bot]
65008cbb73
Auto-generated baselines by 1ES Pipeline Templates (#23603) 2025-02-06 17:06:29 -08:00
Changming Sun
616f70a209 update 2025-02-06 16:58:22 -08:00
Tianlei Wu
09e5724f3b
[CUDA] Fix beam search of num_beams > 32 (#23599)
### Description
* Pass topk_scores to beam scorer in slow topk path.
* Add an env variable `ORT_BEAM_SEARCH_USE_FAST_TOPK` to enable/disable fast topk.
* Add a test case for slow topk path.

### Motivation and Context

This bug was introduced in
https://github.com/microsoft/onnxruntime/pull/16272

Beam search uses a fast CUDA kernel when the number of beams is <= 32. When
the beam size is larger than that threshold, we use another code path (a
slower CUDA kernel) to get the topk. In this `slow topk path`, topk_scores
shall be passed to the beam scorer, but it was not.

This bug causes incorrect results when num_beams > 32. It was not
found previously because such large beam sizes are rarely used.
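The new environment variable and the dispatch threshold described above can be sketched in Python. Only the variable name `ORT_BEAM_SEARCH_USE_FAST_TOPK` comes from this PR; treating `"0"` as "disable fast topk" and the helper `uses_fast_topk` are assumptions for illustration:

```python
import os

# Assumption for illustration: "0" disables the fast topk kernel,
# forcing the slow topk path even for small beam counts.
os.environ["ORT_BEAM_SEARCH_USE_FAST_TOPK"] = "0"

def uses_fast_topk(num_beams, env=os.environ):
    # Hypothetical sketch of the dispatch described above: the fast
    # CUDA kernel is only used when num_beams <= 32 and the override
    # does not disable it.
    if env.get("ORT_BEAM_SEARCH_USE_FAST_TOPK") == "0":
        return False
    return num_beams <= 32
```

This makes the new test case easy to express: pin the variable to the slow path and check that results match the fast path for num_beams > 32.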
2025-02-06 16:50:31 -08:00
Changming Sun
d5ad3f8b84 update 2025-02-06 16:44:42 -08:00
Sushanth Rajasankar
82840f635d
Implement Flash Attention 2 for webgpu EP (#23576)
### Description
This change implements FlashAttention 2 for the webgpu EP's MHA
operator.

Numbers from an Alderlake device show a 2.2x speed-up for prefill, which,
considering that attention is about 50% of the prefill phase (the other 50%
being MatMul), implies roughly a 4x speed-up for attention itself with this
implementation. This is in line with the expected 2-4x perf gain of
FlashAttention over regular attention.

```
Baseline
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       9.54997e+06   <<<<<
        avg (tokens/s): 104.817
        p50 (us):       9.49218e+06
        stddev (us):    251442
        n:              5 * 1001 token(s)
------
With FlashAttention 2
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web\ -l 1000
Batch size: 1, prompt tokens: 1001, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       4.27937e+06     <<<<<
        avg (tokens/s): 233.913
        p50 (us):       4.27687e+06
        stddev (us):    5344.1
        n:              5 * 1001 token(s)
```
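As a consistency check on the benchmark output above, the reported tokens/s is just the prompt length divided by the average time to first token. A quick verification sketch (`tokens_per_second` is not part of model_benchmark, just illustrative arithmetic):

```python
def tokens_per_second(prompt_tokens, avg_us):
    # avg_us is the average time-to-first-token in microseconds.
    return prompt_tokens / (avg_us / 1e6)

baseline = tokens_per_second(1001, 9.54997e6)  # ~104.8 tokens/s, as reported
flash = tokens_per_second(1001, 4.27937e6)     # ~233.9 tokens/s, as reported
```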

### Motivation and Context

On integrated GPUs, memory bandwidth is at a premium. FlashAttention turns
the softmax computation (and therefore the output attention vector
computation) into a running operation instead of maintaining the full QKt
attention-score matrix in memory. As a result, we see significant
improvements in prefill speed: the 200% speed-up measured here.
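The "running" softmax idea can be sketched in plain Python. This is a conceptual illustration only; the real implementation is a WGSL shader working on tiles, and `online_softmax_weighted_sum` is a hypothetical scalar analogue:

```python
import math

def online_softmax_weighted_sum(scores, values):
    """One-pass streaming softmax with O(1) extra memory.

    Computes sum(softmax(scores)[i] * values[i]) without ever
    materializing the full probability vector, which is how
    FlashAttention avoids storing the QKt score matrix.
    """
    m = float("-inf")  # running max, for numerical stability
    d = 0.0            # running denominator: sum of exp(s - m)
    acc = 0.0          # running weighted sum of values
    for s, v in zip(scores, values):
        m_new = max(m, s)
        correction = math.exp(m - m_new)  # rescale old terms to the new max
        d = d * correction + math.exp(s - m_new)
        acc = acc * correction + math.exp(s - m_new) * v
        m = m_new
    return acc / d
```

Each incoming score only rescales the accumulated state, so attention scores never need to be kept around.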

This change uses techniques from co-operative matrix multiply, using
registers shared across a subgroup for fast in-register matrix
multiplication. Without the co-operative matrix multiply technique, ALD
showed about 6.0 s prefill time.

Tested on ALD/TGL Intel integrated GPUs and an Nvidia 4070.

### Future Work
- Fine-tuning and profiling optimizations.
- The current implementation is for prefill only. A generation-phase-optimized
FA2 implementation is possible; however, attention is a tiny part of the
generation phase.
2025-02-06 16:32:05 -08:00
Changming Sun
fa6aae9b56 Merge branch 'snnn/vcpkg2' of https://github.com/microsoft/onnxruntime into snnn/vcpkg2 2025-02-06 16:27:30 -08:00
Changming Sun
2eec561ec1 update 2025-02-06 16:27:19 -08:00
Changming Sun
ced85c02ac update 2025-02-06 23:35:55 +00:00
Changming Sun
db2d590a3e update 2025-02-06 15:33:30 -08:00
Changming Sun
4f9a34dd50 Merge remote-tracking branch 'upstream/main' into snnn/vcpkg2 2025-02-06 23:02:43 +00:00
Changming Sun
1182d315ec Merge remote-tracking branch 'upstream/main' into snnn/vcpkg2 2025-02-06 23:02:34 +00:00
Changming Sun
c34a1699ba update 2025-02-06 23:02:32 +00:00
Changming Sun
f3f95a94b6 format code 2025-02-06 23:01:59 +00:00
Changming Sun
4a25755687 update 2025-02-06 22:58:08 +00:00
Ankit Maheshkar
a6ea57b8f3
OpenVINO EP Weights Sharing Feature (#23553)
### Description
These changes ensure that weight sharing happens between two models using the session context option ep_weight_sharing.

Key changes introduced in this feature:

- Creating a shared context between the two models.
- Extracting external constant initializers and relabelling them as inputs to the model, to allow weight loading from the direct blob.
- Creating EP Context nodes when subgraph partitioning is happening.

### Motivation and Context
This change is required so that LLM prefill and kv-cache models can share the same weights.
It is also required so that EP Context nodes can be formed even when the model is being partitioned into subgraphs.

---------

Co-authored-by: jatinwadhwa921 <jatin.wadhwa@intel.com>
Co-authored-by: jatinwadhwa921 <110383850+jatinwadhwa921@users.noreply.github.com>
Co-authored-by: saurabh <saurabh1.kale@intel.com>
Co-authored-by: TejalKhade28 <tejal.khade@intel.com>
Co-authored-by: sfatimar <sahar.fatima@intel.com>
Co-authored-by: Javier E. Martinez <javier.e.martinez@intel.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: Eric Crawford <eric.r.crawford@intel.com>
2025-02-06 14:57:38 -08:00
Changming Sun
9c50a45ff8 Merge remote-tracking branch 'upstream/snnn/vcpkg2' into snnn/vcpkg2 2025-02-06 22:53:49 +00:00
Changming Sun
42de16f48d stash 2025-02-06 22:48:33 +00:00
Changming Sun
20b715a73c update 2025-02-06 13:43:41 -08:00
Tianlei Wu
2c2ff4aef9
[CUDA] Fix BeamSearchTest.DummyT5WithSequenceInputIds test failure in Windows (#23596)
### Description
BeamSearchTest.DummyT5WithSequenceInputIds failed in Windows because
early stopping was triggered. The cause is that state.early_stopping_ is
interpreted as true in the CUDA kernel at some point, even though printf
still shows its value as false. The root cause is unknown.

Updating the code to use early_stopping as a template parameter seems to
work around the issue.

Other changes:
* Add some debug code (not built into the binary unless
DEBUG_GENERATION is defined) to assist debugging the beam search scorer in
CUDA.
* Enable the DummyT5WithSequenceInputIds test in CI. This test was not
previously run in the Windows CUDA CI pipeline.

### Motivation and Context

Fix a unit test BeamSearchTest.DummyT5WithSequenceInputIds failure in
Windows.
2025-02-06 13:15:09 -08:00
Changming Sun
6c019c09b0 format code 2025-02-06 20:35:48 +00:00
Changming Sun
27f595a2d8 update 2025-02-06 12:34:12 -08:00
Joshua Lochner
d981b153d3
[webgpu/js] Optimize resize webgpu op & fix precision issues (#23591)
### Description

This PR is a follow-up to
https://github.com/microsoft/onnxruntime/pull/23488 and partially
improves upon https://github.com/microsoft/onnxruntime/issues/23403. It
does the following:
- Prevents unnecessary cache shader recompilation for 'nearest' resize
operation.
- Fixes precision (offset-by-one) errors with the asymmetric coordinate
transform. When running the Kokoro TTS model, the
`/decoder/decoder/generator/f0_upsamp/Resize_output_0` output differs at
the end bounds due to precision issues when dividing 21600 by 72 (this
should be 300, but seemingly results in 299.999, which causes issues when
flooring).
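The flooring hazard can be reproduced in plain Python with an analogous case. This is an illustrative sketch, not the actual WGSL fix; `4.35 * 100` stands in for the 21600 / 72 division, and the epsilon value is an assumption:

```python
import math

# A quotient that is mathematically an integer can come out just below it
# in floating point, so a bare floor() lands one index too low.
x = 4.35 * 100           # mathematically 435, but 434.99999999999994 in binary64
naive = math.floor(x)    # 434: the offset-by-one error

def floor_with_tolerance(v, eps=1e-6):
    # Nudge values sitting within eps below an integer back up before flooring.
    return math.floor(v + eps)

fixed = floor_with_tolerance(x)  # 435, the intended index
```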

### Motivation and Context

I did a deep dive over the weekend to try to fix Kokoro TTS on WebGPU and
found that the above node had a large difference. Thinking this was a
major issue, I spent some time fixing it. It turns out it only happens for
a small number of values, leading to a high maximum error, while most
values are correct (as seen here).

BEFORE:
```
[/decoder/decoder/generator/f0_upsamp/Resize_output_0] atol: 78.6640682220459 | rtol: 24.13991587587724 | avgDiff: 0.009967932171121087 | medianDiff: 0.000030517578125
```

AFTER:
```
[/decoder/decoder/generator/f0_upsamp/Resize_output_0] atol: 0.0011138916015625 | rtol: 0.0020059924232260704 | avgDiff: 0.00008570214675873825 | medianDiff: 0.000030517578125
```

So, although it has a very small impact on the final output (waveform),
this bug could appear with other models in a more severe way.

BEFORE:
```
[waveform] atol: 0.04784199967980385 | rtol: 1366.0462001093495 | avgDiff: 0.0009544936942737713 | medianDiff: 0.00015346752479672432
```

AFTER:
```
[waveform] atol: 0.04775865003466606 | rtol: 1354.7002460360852 | avgDiff: 0.000954830244055033 | medianDiff: 0.00015274062752723694
```
2025-02-06 10:26:25 -08:00
Changming Sun
328a13c06d
Enable VCPKG in more pipelines (#23590)
### Description
Enable VCPKG in more pipelines
2025-02-06 10:10:31 -08:00
Yifan Li
6728d6085d
[TensorRT EP] support TensorRT 10.8-GA (#23592)
2025-02-06 10:05:57 -08:00
Jambay Kinley
d1fb58b0f2
Quantization tool: Allow user to override calibrator's session EP (#23559)
### Description
The quantization calibrators have `execution_providers` attributes, but
there is no way for a user to provide their own providers when using the
`quantize` or `quantize_static` functions. This PR adds a
`calibration_providers` parameter to let users specify the execution
providers to use during calibration. It is helpful when quantizing large
models that are slow to calibrate on the CPU.
- Chose `calibration_providers` as the name since the docstrings already
refer to an `execution_provider`
169917b1e7/onnxruntime/python/tools/quantization/quantize.py (L204)

169917b1e7/onnxruntime/python/tools/quantization/quantize.py (L415)
which is not present anywhere in the code.
- Can change the name to something else if needed, like
calibrator_providers, and/or make it a string instead of a
providers list.
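The fallback behavior described above (user-supplied providers win, otherwise calibrate on the CPU) can be sketched with a hypothetical helper; `resolve_calibration_providers` is illustrative and not a function in the quantization tool:

```python
def resolve_calibration_providers(calibration_providers=None):
    # Hypothetical helper: if the user passed calibration_providers
    # (e.g. ["CUDAExecutionProvider"]), use them for the calibrator's
    # session; otherwise fall back to the default CPU provider.
    return calibration_providers or ["CPUExecutionProvider"]
```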
2025-02-05 22:38:21 -08:00
Hector Li
649ced4a60
Enable user loading model with external data from memory buffer (#23557)
Add a session option to let users load a model with external data from a memory buffer. Users want to set the folder path for the external data files.

### Description
In some cases users load the model from a memory buffer but cannot load the external data files into memory. They need a way to set the folder path for the external data files so that ORT can figure out the external data location.
2025-02-05 22:31:13 -08:00
Satya Kumar Jandhyala
544bdd6073
Fix ConvTranspose for certain attribute combinations (#23488)
### Description
Convert the output_padding attribute from 1D to 2D for ConvTranspose.



### Motivation and Context
https://github.com/microsoft/onnxruntime/issues/23403
2025-02-05 12:22:47 -08:00
Changming Sun
8f6ddf3bd5
Delete extra cgmanifest entries and files (#23583)
Remove the auto-generated cgmanifest.json, because we can now get the
same information from vcpkg.
Also, remove some outdated entries in the main cgmanifest.json file.
2025-02-05 11:21:21 -08:00
Changming Sun
5f6a3158f8
Enable VCPKG in CI build (#23426)
### Description
1. Enable VCPKG flag in Windows CPU CI build pipelines. 
2. Increased the minimum supported cmake version from 3.26 to 3.28. Because
of this, dropped support for the old way of finding Python via
"find_package(PythonLibs)". Therefore, build.py no longer sets the
"PYTHON_EXECUTABLE" cmake var when doing the cmake configure.
3. Added "xnnpack-ep" as a feature for ORT's vcpkg config.
4. Added asset cache support for ORT's vcpkg build
5. Added VCPKG triplet files for Android build.
6. Set VCPKG triplet to "universal2-osx" if CMAKE_OSX_ARCHITECTURES was
found in cmake extra defines.
7. Removed a small piece of code in build.py that was for supporting CUDA
versions < 11.8.
8. Fixed an issue that CMAKE_OSX_ARCHITECTURES sometimes got specified
twice when build.py invoked cmake.
9. Added more model tests to Android build. After this change, we will
test all ONNX versions instead of just the latest one.
10. Fixed issues that are related to build.py's "--build_nuget"
parameter. Also, enable the flag in most Windows CPU CI build jobs.
11. Removed a restriction in build.py that disallowed cross-compiling
Windows ARM64 nuget package on Windows x86.
 
### Motivation and Context
Adopt vcpkg.
2025-02-05 10:58:53 -08:00
dependabot[bot]
e1e3f623f6
Bump lintrunner from 0.12.5 to 0.12.7 (#23326)
Bumps [lintrunner](https://github.com/suo/lintrunner) from 0.12.5 to
0.12.7.
Changelog (sourced from lintrunner's CHANGELOG.md):

## [0.12.7] - 2024-12-05

### Bug Fixes
- Build x86_64 wheels for Windows (a4d6b74)
- Fix Clippy violations (05ff643)
- Fetch all commit history to fix MacOS builds (3770be6)

Commits:
- 1b70da0 chore(release): prep for 0.12.7
- 3770be6 [CI] Fetch full commit history (#81)
- b2482af [CI] Use `actions/checkout@v4` (#80)
- 05ff643 Fix clippy violations (#79)
- 1be20c6 chore(release): prep for 0.12.6
- a4d6b74 fix(build): build x86_64 wheels for Windows (#73)
- Full diff: https://github.com/suo/lintrunner/compare/v0.12.5...v0.12.7

[Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=lintrunner&package-manager=pip&previous-version=0.12.5&new-version=0.12.7)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 19:50:56 -08:00
Jon Campbell
cd8775f518
Fix Node JS Samples (#23581)
### Description
The Node JS Samples included in the repository have outdated package
references that are broken, which are fixed in this PR.

### Motivation and Context
The samples included in this repository should just work, but sadly do
not. The reason is that they use very outdated references for the npm
modules. This fix updates the dependencies to the current
onnxruntime-node, which fixes the samples. It also adds a small update to
the .gitignore to exclude the node_modules directories in the samples
directory, keeping the local repo changelist cleaner.
2025-02-04 19:50:29 -08:00
Prathik Rao
6b4f9c481d
[WebGPU EP] Batch Norm Implementation (#23525)
Increases operator coverage for webgpu ep.

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-02-04 17:38:45 -08:00
Gavin Kinsey
1fce51b3b2
Fix all instances of 4244 and 4267 warnings in OV EP code (#23567)
### Description
Remove MSVC warnings 4244, 4267 from the list of disabled warnings in
cmake.
Fix the code that generates the warnings so that it no longer does.

### Motivation and Context
This makes onnxruntime_providers_openvino.dll pass BinSkim analysis.
Without this change BinSkim complains about the disabled warnings.
2025-02-04 17:13:27 -08:00
Hector Li
c29ca1cb41
Update QNN default version to 2.31 (#23573)
Update QNN default version to 2.31
2025-02-04 16:24:54 -08:00
Caroline Zhu
2fc75a45a2
[mobile] Add Android BrowserStack test project back (#23551)
## Description
Follow-up for #23383 and #23474

* Adds android BrowserStack test back in
* Modifies MAUI csproj file to build into an APK


### Motivation and Context
There were 2 issues with the previous PRs:
1. The updated MAUI .csproj file configuration failed when building for
iOS and MacCatalyst. This caused problems in the packaging pipeline
because we build all C# projects in the .sln file in the packaging
pipeline. Removed the Mac & iOS build targets for now.

2. The previous MAUI .csproj file configuration did not build into an
APK. It was missing the `<OutputType>` XAML tag and the Android package
type XAML tag.
2025-02-04 14:39:50 -08:00
Tianlei Wu
9e18b6a0f3
[CUDA] Update nvcc flags (#23572)
### Description
(1) Remove `if (CMAKE_CUDA_COMPILER_VERSION VERSION_GREATER_EQUAL 11)`
since build requires cuda >= 11.4.
(2) Add sm_86 and sm_89 since we generate SASS code for specified cuda
architectures only. This change could support popular consumer GPUs
(like RTX 30X0 and RTX 40X0).
(3) Add sm_120 to support Blackwell GPUs (like RTX 50X0 etc).
(4) Add `-Xfatbin=-compress-all` to reduce wheel size. When
CMAKE_CUDA_ARCHITECTURES is not specified, the Linux wheel size built with
CUDA 12.8 is reduced by 8% (from 324 MB to 299 MB).

### Motivation and Context

To support popular consumer GPUs (RTX 30x0, 40x0, 50x0) in the default
setting. Reduce binary size.

Note that the default sm settings do not impact the official released
binaries. ORT official release binaries are built with settings like
CMAKE_CUDA_ARCHITECTURES=75;80;90, which has both SASS (real) and PTX
(virtual) by default. See
https://cmake.org/cmake/help/latest/prop_tgt/CUDA_ARCHITECTURES.html for
more info.
2025-02-04 11:47:02 -08:00
Adrian Lizarraga
b47e1e64d7
[QNN EP] Make offloading graph input/output quantization (to CPU) the default (#23368)
### Description
Makes the QNN provider option `offload_graph_io_quantization` enabled by
default. It was previously disabled by default.



### Motivation and Context
Enabling this option significantly decreases inference latency for many
models.
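In the Python API, opting back out of the new default might look like the following. This is a hypothetical sketch: the option name comes from this PR, but the "0"/"1" string values are assumptions:

```python
# QNN EP provider options are passed as (name, options-dict) pairs when
# creating an InferenceSession. Setting the assumed "0" value would
# restore the old behavior of keeping graph I/O quantization on QNN.
qnn_provider = (
    "QNNExecutionProvider",
    {"offload_graph_io_quantization": "0"},
)
providers = [qnn_provider]
```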
2025-02-04 11:42:46 -08:00
Tianlei Wu
75a9b40da2
[ROCm] Update CI to use rocm 6.3.2 (#23577)
### Description
* Update rocm to 6.3.2;
* Remove dependency on cupy (which does not support rocm 6.3 yet).

2025-02-04 11:01:12 -08:00
dependabot[bot]
26ff2b66ef
Bump ruff from 0.9.3 to 0.9.4 (#23563)
Bumps [ruff](https://github.com/astral-sh/ruff) from 0.9.3 to 0.9.4.
Release notes (sourced from ruff's releases):

## 0.9.4

### Preview features
- [`airflow`] Extend airflow context parameter check for `BaseOperator.execute` (`AIR302`) (#15713)
- [`airflow`] Update `AIR302` to check for deprecated context keys (#15144)
- [`flake8-bandit`] Permit suspicious imports within stub files (`S4`) (#15822)
- [`pylint`] Do not trigger `PLR6201` on empty collections (#15732)
- [`refurb`] Do not emit diagnostic when loop variables are used outside loop body (`FURB122`) (#15757)
- [`ruff`] Add support for more `re` patterns (`RUF055`) (#15764)
- [`ruff`] Check for shadowed `map` before suggesting fix (`RUF058`) (#15790)
- [`ruff`] Do not emit diagnostic when all arguments to `zip()` are variadic (`RUF058`) (#15744)
- [`ruff`] Parenthesize fix when argument spans multiple lines for `unnecessary-round` (`RUF057`) (#15703)

### Rule changes
- Preserve quote style in generated code (#15726, #15778, #15794)
- [`flake8-bugbear`] Exempt `NewType` calls where the original type is immutable (`B008`) (#15765)
- [`pylint`] Honor banned top-level imports by `TID253` in `PLC0415` (#15628)
- [`pyupgrade`] Ignore `is_typeddict` and `TypedDict` for `deprecated-import` (`UP035`) (#15800)

### CLI
- Fix formatter warning message for `flake8-quotes` option (#15788)
- Implement tab autocomplete for `ruff config` (#15603)

### Bug fixes
- [`flake8-comprehensions`] Do not emit `unnecessary-map` diagnostic when lambda has different arity (`C417`) (#15802)
- [`flake8-comprehensions`] Parenthesize `sorted` when needed for `unnecessary-call-around-sorted` (`C413`) (#15825)
- [`pyupgrade`] Handle end-of-line comments for `quoted-annotation` (`UP037`) (#15824)

### Documentation
- Add missing config docstrings (#15803)
- Add references to `trio.run_process` and `anyio.run_process` (#15761)
- Use `uv init --lib` in tutorial (#15718)

### Contributors
@AlexWaygood, @Garrett-R, @InSyncWithFoo, @JelleZijlstra, @Lee-W, @MichaReiser, @charliermarsh, @dcreager, @dhruvmanila, @dylwil3

... (truncated)
Commits:
- 854ab03 Bump version to 0.9.4 (#15831)
- b0b8b06 Remove semicolon after TypeScript interface definition (#15827)
- 451f251 [red-knot] Clarify behavior when redeclaring base class attributes (#15826)
- 13cf3e6 [`flake8-comprehensions`] Parenthesize `sorted` when needed for `unnecessary-...`
- 56f956a [`pyupgrade`] Handle end-of-line comments for `quoted-annotation` (`UP037`) (...
- 7a10a40 [`flake8-bandit`] Permit suspicious imports within stub files (`S4`) (#15822)
- 3125332 [red-knot] Format mdtest snippets with the latest version of black (#15819)
- 15d886a [red-knot] Consider all definitions after terminal statements unreachable (#1...
- e1c9d10 [`flake8-comprehensions`] Do not emit `unnecessary-map` diagnostic when lambd...
- 23c9884 Preserve quotes in generated f-strings (#15794)
- Additional commits: https://github.com/astral-sh/ruff/compare/0.9.3...0.9.4

[Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.9.3&new-version=0.9.4)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 10:55:27 -08:00
Jian Chen
b2560a75cf
Update react-native to 0.72 (#23509)
2025-02-04 09:53:20 -08:00
Yulong Wang
faee9125fb
[js] update JavaScript API to support QNN EP options (#23486)
### Description

As a pre-requisite of #23468
2025-02-03 17:38:50 -08:00