### Description
Update the CUDA ArgMin/ArgMax op kernels to have an end version of 11, since opset
12+ is not supported yet.
With the way these kernels are currently registered, the documentation
shows support for opset 11+, which is not accurate.
### Motivation and Context
Fixes #13781
This PR registers the following operators for opset 16 in the DML EP:
- LeakyRelu-16
- PRelu-16
- Where-16
- GreaterOrEqual-16
- LessOrEqual-16
Identity-16 was not added in this PR due to pipeline failures.
Co-authored-by: Numfor Mbiziwo-Tiapo <numform@microsoft.com>
### Description
NumPy 1.24.0 removed `np.float`.
```
/opt/hostedtoolcache/Python/3.8.15/x64/bin/python onnxruntime_test_python_sparse_matmul.py
EE.
======================================================================
ERROR: testRunContribSparseMatMul (__main__.TestSparseToDenseMatmul)
Mutliple sparse COO tensor to dense
----------------------------------------------------------------------
Traceback (most recent call last):
File "onnxruntime_test_python_sparse_matmul.py", line 407, in testRunContribSparseMatMul
np.float,
File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'float'
======================================================================
ERROR: testRunSparseOutputOnly (__main__.TestSparseToDenseMatmul)
Try running models using the new run_with_ort_values
----------------------------------------------------------------------
Traceback (most recent call last):
File "onnxruntime_test_python_sparse_matmul.py", line 39, in testRunSparseOutputOnly
values = np.array([1.764052391052246, 0.40015721321105957, 0.978738009929657], np.float)
File "/opt/hostedtoolcache/Python/3.8.15/x64/lib/python3.8/site-packages/numpy/__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'float'
```
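A minimal sketch of the fix, assuming the tests switch to the explicit `np.float64` dtype (the builtin `float` works equally well):

```python
import numpy as np

# np.float was a deprecated alias for the builtin float; NumPy 1.24 removed it.
# Use the builtin float or the explicit np.float64 dtype instead.
values = np.array([1.764052391052246, 0.40015721321105957, 0.978738009929657],
                  dtype=np.float64)
print(values.dtype)  # float64
```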
### Description
Use the target name for flatbuffers and add a version range for it, similar to #13870.
### Motivation and Context
To fix a build error:
```
CMake Error at onnxruntime_graph.cmake:88 (add_dependencies):
The dependency target "flatbuffers" of target "onnxruntime_graph" does not
exist.
Call Stack (most recent call first):
CMakeLists.txt:1490 (include)
```
This happens when the flatbuffers library is already installed, for example
from apt-get on Ubuntu. The version provided by Ubuntu 20.04 is not
compatible with our code; the one in Ubuntu 22.04 works fine.
### Description
Update absl to a new version
### Motivation and Context
The new version contains fixes that are needed for Nvidia GPU build.
Once we update it to that version, we don't need to maintain our private
patches for Nvidia GPU build.
### Description
Explore the possible re-use of the logits buffer in `GreedySearch` for
cases where sequence length == 1 (Post the first decoding run, the
sequence length is guaranteed to be 1). This re-use will ensure that we
do not have to make copies of the logits before processing them.
Currently, we make a copy of the logits even when the sequence length == 1,
which is unnecessary, as we can directly re-use the logits buffer for
the token-generation step. A similar optimization exists in
`BeamSearch` but seems lacking in `GreedySearch`. Since the logits
buffer may contain padded data, we need to adjust the pieces consuming
the logits buffer directly to account for any padding.
A more invasive change (it needs changes in a few places) would be to adjust
the interface of `ProcessLogits()` so that it takes a non-const reference to
the logits, since (based on my understanding) this is the only place where
the logits from the decoder subgraph will ever be used; giving
`ProcessLogits()` license to mutate/process the underlying buffer of the
logits OrtValue seems reasonable (instead of making a copy and then
mutating/processing that). This would also remove the ugly `const_cast`(s)
seen in this change.
### Description
MLAS QGEMM kernels need memory buffers for packing of source tensors. This
change moves these buffers from the stack to the heap.
### Motivation and Context
MLAS QGEMM kernels have had packing buffers on the stack since the beginning
of time. Emerging hardware demands larger and larger buffers, causing
potential stack-overflow problems down the road. This change moves these
buffers from the stack to the heap.
This change also introduces a thread initializer per kernel. For
instance, in the new AMX instruction set (support coming), we need to
initialize the tile registers per thread. This requirement can be easily
satisfied by tapping into this change.
Co-authored-by: Chen Fu <fuchen@microsoft.com>
Implement reuse of the kv_cache past and present tensors in the Attention ops,
with a unit test for the feature.
Utilize the reused kv_cache past and present tensors in Greedy Search,
with a correctness test for it.
Co-authored-by: Zhang Lei <phill.zhang@gmail.com>
Fix error: builtin __has_trivial_destructor is deprecated; use __is_trivially_destructible instead [-Werror,-Wdeprecated-builtins]
This is not a clean fix: as in 13783, users will need to manually set `CMAKE_HIP_FLAGS="-Wno-deprecated-builtins"` if they want to use a self-built hip-clang combined with ROCm 5.3.* or older.
### Description
Update the versions of a few build dependencies for the onnxruntime NPM
packages.
Update the Node.js version to v16.x in the Linux CI; v12 is too outdated. See
the [Node.js release
schedule](https://github.com/nodejs/release#release-schedule).
### Motivation and Context
- upgrading to the latest webpack allows using the latest Node.js LTS version;
the previous version of webpack does not work on Node.js v18, and this is
fixed in the latest version
- upgrading to the latest typescript, ts-loader, and other dev dependencies
accelerates the build and bundling
- the upgrade also helps resolve security warnings for dependencies that may
be vulnerable in outdated versions
### Description
Add the ability to run graphs.
### Motivation and Context
A brief description is as follows:
1) If the whole graph is supported, it will be processed by the graph
engine directly.
2) If the whole graph is not supported, it will be divided into subgraphs
and single operators; the subgraphs will run on the graph engine, and the
single operators will fall back to the traditional mode.
### Description
Add ability to set RunOptions config entries. Largely a cut-and-paste of
the existing code for setting SessionOptions config entries.
### Motivation and Context
Fixes #13936
### Description
For compilation in a container, the ADO Cache task doesn't work directly.
The workaround is to mount the cache directory into the container, and let
CCache inside the container read/write the cache data.
In short, we just leverage the ADO API to download/upload cache data.
The post-job steps run in stack (LIFO) order, so the PostBuildCleanUp tasks
must be defined first; that way PostBuildCleanUp executes last. Otherwise,
the Cache task would fail to upload the cache because the agent directory
has already been cleaned.
Comment out the affinity-setting logic, which introduced an unnecessary
binary size increase for the minimal build.
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
There are some issues in
https://github.com/microsoft/onnxruntime/pull/13015:
1. The model name should be used rather than the graph name in the model ID
generator.
2. Hash collisions are observed in the ID cache, which means different models
may have the same key and thus load the same hash ID from the cache.
3. For the class and function that generate the model ID, "MetaDef" in the
name is not appropriate.
4. We should reuse murmurhash3 rather than copying it over to the TRT EP.
This PR fixes those issues.
### Description
QAttention performance improvement when the hardware supports AMX and
AVX-BF16 execution.
### Motivation and Context
- Streamlined the code to dynamically switch between BF16 and FP32
execution as and when supported by the hardware.
- Split the QKV memory into three separate buffers for Q, K, and V. This
helps run QAttention on the GPU and take advantage of parallel
processing.
- This change has shown a significant performance gain for the
QAttention operator on hardware such as Sapphire Rapids, which supports
AMX and AVX-BF16.
### Description
Change the default value of `cudnn_conv_use_max_workspace` to be consistent with ORT Training.
Test results with Stable Diffusion 1.4:
Latency (Seconds per Query) | T4 | V100 | A100
-- | -- | -- | --
ORT FP32 (Before) | 28.4 | 10.1 | 7.2
ORT FP32 (After) | 26.2 | 8.3 | 4.9
Gain | 8% | 18% | 32%
Latency (Seconds per Query) | T4 | V100 | A100
-- | -- | -- | --
ORT FP16 (Before) | 13.1 | 6.4 | 4.3
ORT FP16 (After) | 9.6 | 3.8 | 2.4
Gain | 27% | 41% | 44%
We can see that there is a significant gain after changing the default value. A normal user might not know about this setting, so it is better to change the default value so that users get the best performance out of the box.
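For reference, the same behavior can be requested explicitly through CUDA EP provider options in the Python API (a sketch; session creation is commented out since it requires a CUDA build of onnxruntime):

```python
# Explicitly opt in to the cuDNN max-workspace behavior, for builds where
# the default is still "0". Running a session requires onnxruntime-gpu.
providers = [
    ("CUDAExecutionProvider", {"cudnn_conv_use_max_workspace": "1"}),
    "CPUExecutionProvider",
]
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```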
**Description**: This PR includes the following work:
1. Provide stream and related synchronization abstractions in
onnxruntime.
2. Enhance onnxruntime's execution planner / executor / memory arena to
support executing multiple streams in parallel.
3. Deprecate the parallel executor for CPU.
4. Deprecate the Fence mechanism.
5. Update the CUDA / TensorRT EPs to support the stream mechanism,
supporting running different requests in different CUDA streams.
**Motivation and Context**
- Why is this change required?
currently, the execution plan is just a linear list of primitives, and ORT
executes them step by step. For any given graph, ORT will serialize it to a
fixed execution order. This sequential execution design simplifies most
scenarios, but it has the following limitations:
1. It is difficult to enable inter-node parallelization; we have a
half-baked parallel executor, but it is very difficult to make it work
with GPUs.
2. The fence mechanism works for the single-GPU-stream + CPU-thread
case, but when extended to multiple streams it is difficult to manage the
cross-stream synchronizations.
3. Our CUDA EP relies on the BFCArena to make memory management work
with the async GPU kernels, but the current BFCArena is not aware of
streams, so it doesn't behave correctly when run with multiple streams.
This PR enhances our existing execution plan and executor to support
multi-stream execution. We use a unified algorithm to manage both the
single-stream and multi-stream scenarios.
This PR mainly focuses on the infrastructure support for multi-stream
execution; that is, given a valid stream assignment, onnxruntime can
execute it correctly. How to generate a good stream assignment for a
given model will come in a future PR.
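As a toy illustration of the synchronization abstraction (plain Python threads and events standing in for device streams and notifications; not the ORT API), one "stream" can wait on another's output while both otherwise run in parallel:

```python
import threading

# Toy stand-ins: a "stream" is a worker thread, a "notification" is an Event.
done = threading.Event()
results = {}

def stream_a():
    results["a"] = 21 * 2   # produce a value on stream A
    done.set()              # notify: stream A's result is now visible

def stream_b():
    done.wait()             # wait on stream A's notification before consuming
    results["b"] = results["a"] + 1

ta = threading.Thread(target=stream_a)
tb = threading.Thread(target=stream_b)
tb.start(); ta.start()
ta.join(); tb.join()
print(results)              # {'a': 42, 'b': 43}
```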
Co-authored-by: Cheng Tang <chenta@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Cheng Tang <chenta@microsoft.com>
Co-authored-by: RandySheriffH <48490400+RandySheriffH@users.noreply.github.com>
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Co-authored-by: cao lei <jslhcl@gmail.com>
Co-authored-by: Lei Cao <leca@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
### Description
Fix a problem: the macOS CI pipeline doesn't run tests. It is due to a code
refactoring I recently made.
### Motivation and Context
Add the tests back.
### Description
This PR adds support for `float64` kernels in the latest versions of the
operators Floor, Ceil, and IsNaN.
### Motivation and Context
The lack of these kernels is non-trivial to work around and easily leads
to performance losses when it is attempted. When equivalence with an
existing implementation is required, precision is easily lost when
casting to `float32` instead.
IsNaN is common when cleaning up data in an ML pipeline. Floor and Ceil
have uses for discretising values and single-precision floats are
insufficient to round well when values get larger than a few million.
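The precision point can be seen directly: above 2**24 the spacing between adjacent `float32` values exceeds 1, so flooring through `float32` can land on the wrong integer, while `float64` is exact here.

```python
import numpy as np

x = 16777217.4                 # just above 2**24 = 16777216
f32 = np.floor(np.float32(x))  # nearest float32 is 16777218.0, so floor "rounds up"
f64 = np.floor(np.float64(x))  # 16777217.0, as expected
print(f32, f64)
```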
According to my measurement this only increases the binary size by a few
kilobytes (on the Python wheel of RelWithDebInfo).
Closes #13673 (Round already has float64 support).
Partially solves #8791 (it looks like there are parallel issues/PRs open for
Split, but it is also hard to work around and hence useful).
Signed-off-by: jbachurski <kbachurski@gmail.com>
### Description
Allow the tensor to be a scalar if it is not per-channel.
### Motivation and Context
Fixes https://github.com/microsoft/onnxruntime/issues/13915
Integrate TensorRT 8.5
- Update TensorRT EP to support TensorRT 8.5
- Update relevant CI pipelines
- Disable known non-supported ops for TensorRT
- Make the timeout configurable.
We observed more than [20
hours](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=256729&view=logs&j=71ce39d8-054f-502a-dcd0-e89fa9931f40)
of running unit tests with TensorRT 8.5 in the package pipelines. Because we
can't use a placeholder to significantly reduce testing time in the package
pipelines (the C API application test will deadlock), we only run the
subsets of model tests and unit tests that are related to TRT (adding a new
build flag `--test_all_timeout` and setting it to 72000 seconds in the
package pipelines). Note that we still run all the tests in the TensorRT CI
pipelines to retain full test coverage.
- Include https://github.com/microsoft/onnxruntime/pull/13918 to fix an
onnx-tensorrt compile error.
Co-authored-by: George Wu <jywu@microsoft.com>
### Description
Fix usage of enable_training_ops and reduce ifdef complexity for
training builds.
### Motivation and Context
This is the second refactoring PR towards creating a dedicated build for
on-device training. This PR aims to reduce some complexity. We can set
`ENABLE_TRAINING_OPS` in CMake when either `ENABLE_TRAINING` or
`ENABLE_TRAINING_ON_DEVICE` is selected; this way we don't have to use
`#if defined(ENABLE_TRAINING) || defined(ENABLE_TRAINING_ON_DEVICE)`
everywhere in the code.
GetCpuPreferredNodes is a function that gets CPU-preferred nodes from a
graph for a target EP (such as CUDA). It starts from the CPU outputs of the
target EP's nodes, traverses the graph, and tries to fall back tentative
nodes from the target EP to the CPU EP.
For example, with Shape->Gather->Concat->Reshape: at the beginning, all 4
nodes are tentative. Since the output of Shape is a CPU output, the
traversal starts from that output and falls back Gather and Concat to the
CPU EP. Reshape cannot fall back because its other input is not a CPU
input.
But for the case Shape->Gather->ReduceProd->Concat->Reshape: since
ReduceProd doesn't have an int64_t kernel in the target EP (CUDA here), it
is not a tentative node. The traversal still starts from Shape's output,
but with the current logic it stops when reaching ReduceProd, so Concat
does not fall back and is assigned to the target EP. In the end, Memcpy
nodes are added before and after the Concat node because both its input and
output are CPU tensors.
This PR fixes this issue. In the above case, since ReduceProd is not a
tentative node, it either already has an EP assigned or no target-EP kernel
was found for it, so we can continue the graph traversal, make it a CPU
node, and make all its outputs CPU outputs.
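A toy sketch of the changed traversal (hypothetical data structures, not the actual ORT implementation): the walk from a CPU output now continues through a non-tentative node that has no target-EP kernel, instead of stopping there.

```python
from collections import deque

def cpu_preferred_nodes(start, downstream, tentative, has_target_kernel):
    """Walk downstream from consumers of a CPU output, collecting nodes
    that should fall back to the CPU EP. A toy model of the real logic."""
    cpu_nodes, queue = set(), deque(start)
    while queue:
        node = queue.popleft()
        if node in cpu_nodes:
            continue
        if node in tentative or node not in has_target_kernel:
            # Tentative nodes fall back as before; the fix also lets the
            # walk continue through a node with no target-EP kernel
            # (e.g. ReduceProd without an int64_t CUDA kernel).
            cpu_nodes.add(node)
            queue.extend(downstream.get(node, []))
        # Otherwise the node runs on the target EP and the walk stops here.
    return cpu_nodes

# Shape -> Gather -> ReduceProd -> Concat -> Reshape
downstream = {"Gather": ["ReduceProd"], "ReduceProd": ["Concat"],
              "Concat": ["Reshape"]}
print(cpu_preferred_nodes(["Gather"], downstream,
                          tentative={"Gather", "Concat"},
                          has_target_kernel={"Reshape"}))
```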
### Description
<!-- Describe your changes. -->
Sort the kernel explorer profile results; the instances are sorted according
to their performance.
1. Make sorting an optional config when we pass parameters through the
command line: `python gemm_test.py N N float16 M N K` disables sorting by
default; add `--sort` to enable it.
2. `python gemm_test.py` enables sorting by default.
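A sketch of how such a flag is typically wired with `argparse` (an assumption for illustration, not the exact kernel explorer code):

```python
import argparse

parser = argparse.ArgumentParser(description="kernel explorer profiling")
parser.add_argument("--sort", action="store_true",
                    help="sort profile results by performance")
# When benchmark parameters are passed on the command line, sorting stays
# off unless --sort is given explicitly.
args = parser.parse_args(["--sort"])
print(args.sort)  # True
```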
### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
Co-authored-by: peixuanzuo <peixuanzuo@linmif39a000004.zvflicr54joexhdgnhvmxrxygg.phxx.internal.cloudapp.net>
The float16.h header is shared between the CPU and ROCm EPs. The
USE_ROCM macro is defined universally, but for the float16.h header we
only wish to detect the hip-clang compiler. Otherwise, the CPU EP fails
to build because of -Werror -Wuninitialized caused by the USE_ROCM code
additions, and the CPU EP should be using a different code path.
### Description
Fix a prefast warning in the LayerNorm ops.
### Motivation and Context
Prefast complains that we should upcast before subtracting because
otherwise an overflow (or underflow) could happen. So we add these casts
to appease it.
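The hazard prefast guards against is easy to reproduce with unsigned integers (NumPy arrays used for illustration; the actual code is C++): subtracting first wraps around, while upcasting first gives the expected signed result.

```python
import numpy as np

a = np.array([2], dtype=np.uint32)
b = np.array([3], dtype=np.uint32)

wrapped = (a - b).astype(np.int64)              # subtract, then cast: wraps
safe = a.astype(np.int64) - b.astype(np.int64)  # upcast, then subtract

print(wrapped[0], safe[0])  # 4294967295 -1
```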
Copy the right files according to the build documentation.
The bug originated in a change addressing a run break on some machines
(which needed threaded SIMD instead of only threading); analysis is ongoing.
Bumps [engine.io](https://github.com/socketio/engine.io) and
[socket.io](https://github.com/socketio/socket.io). These dependencies
needed to be updated together.
Updates `engine.io` from 6.1.3 to 6.2.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/socketio/engine.io/releases">engine.io's
releases</a>.</em></p>
<blockquote>
<h2>6.2.1</h2>
<p>⚠️ This release contains an important security fix
⚠️</p>
<p>A malicious client could send a specially crafted HTTP request,
triggering an uncaught exception and killing the Node.js process:</p>
<pre><code>Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
</code></pre>
<p>Please upgrade as soon as possible.</p>
<h3>Bug Fixes</h3>
<ul>
<li>catch errors when destroying invalid upgrades (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/658">#658</a>)
(<a
href="425e833ab1">425e833</a>)</li>
</ul>
<h2>6.2.0</h2>
<h2>Features</h2>
<ul>
<li>add the "maxPayload" field in the handshake details (<a
href="088dcb4dff">088dcb4</a>)</li>
</ul>
<p>So that clients in HTTP long-polling can decide how many packets they
have to send to stay under the maxHttpBufferSize
value.</p>
<p>This is a backward compatible change which should not mandate a new
major revision of the protocol (we stay in v4), as
we only add a field in the JSON-encoded handshake data:</p>
<pre><code>0{"sid":"lv_VI97HAXpY6yYWAAAC","upgrades":["websocket"],"pingInterval":25000,"pingTimeout":5000,"maxPayload":1000000}
</code></pre>
<h4>Links</h4>
<ul>
<li>Diff: <a
href="https://github.com/socketio/engine.io/compare/6.1.3...6.2.0">https://github.com/socketio/engine.io/compare/6.1.3...6.2.0</a></li>
<li>Client release: <a
href="https://github.com/socketio/engine.io-client/releases/tag/6.2.0">6.2.0</a></li>
<li>ws version: <a
href="https://github.com/websockets/ws/releases/tag/8.2.3">~8.2.3</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/socketio/engine.io/blob/main/CHANGELOG.md">engine.io's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/socketio/engine.io/compare/6.2.0...6.2.1">6.2.1</a>
(2022-11-20)</h2>
<p>⚠️ This release contains an important security fix
⚠️</p>
<p>A malicious client could send a specially crafted HTTP request,
triggering an uncaught exception and killing the Node.js process:</p>
<pre><code>Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Socket instance at:
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read'
}
</code></pre>
<p>Please upgrade as soon as possible.</p>
<h3>Bug Fixes</h3>
<ul>
<li>catch errors when destroying invalid upgrades (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/658">#658</a>)
(<a
href="425e833ab1">425e833</a>)</li>
</ul>
<h1><a
href="https://github.com/socketio/engine.io/compare/3.5.0...3.6.0">3.6.0</a>
(2022-06-06)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>add extension in the package.json main entry (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/608">#608</a>)
(<a
href="3ad0567dbd">3ad0567</a>)</li>
<li>do not reset the ping timer after upgrade (<a
href="1f5d469986">1f5d469</a>),
closes <a
href="https://github-redirect.dependabot.com//github-redirect.dependabot.com/socketio/socket.io-client-swift/pull/1309/issues/issuecomment-768475704">socketio/socket.io-client-swift#1309</a></li>
</ul>
<h3>Features</h3>
<ul>
<li>decrease the default value of maxHttpBufferSize (<a
href="58e274c437">58e274c</a>)</li>
</ul>
<p>This change reduces the default value from 100 mb to a more sane 1
mb.</p>
<p>This helps protect the server against denial of service attacks by
malicious clients sending huge amounts of data.</p>
<p>See also: <a
href="https://github.com/advisories/GHSA-j4f2-536g-r55m">https://github.com/advisories/GHSA-j4f2-536g-r55m</a></p>
<ul>
<li>increase the default value of pingTimeout (<a
href="f55a79a28a">f55a79a</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="24b847be6a"><code>24b847b</code></a>
chore(release): 6.2.1</li>
<li><a
href="425e833ab1"><code>425e833</code></a>
fix: catch errors when destroying invalid upgrades (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/658">#658</a>)</li>
<li><a
href="99adb00ba1"><code>99adb00</code></a>
chore(deps): bump xmlhttprequest-ssl and engine.io-client in
/examples/latenc...</li>
<li><a
href="d196f6a6b7"><code>d196f6a</code></a>
chore(deps): bump minimatch from 3.0.4 to 3.1.2 (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/660">#660</a>)</li>
<li><a
href="7c1270f98c"><code>7c1270f</code></a>
chore(deps): bump nanoid from 3.1.25 to 3.3.1 (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/659">#659</a>)</li>
<li><a
href="535a01d889"><code>535a01d</code></a>
ci: add Node.js 18 in the test matrix</li>
<li><a
href="1b71a6f5cb"><code>1b71a6f</code></a>
docs: remove "Vanilla JS" highlight from README (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/656">#656</a>)</li>
<li><a
href="917d1d29e1"><code>917d1d2</code></a>
refactor: replace deprecated <code>String.prototype.substr()</code> (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/646">#646</a>)</li>
<li><a
href="020801ab8c"><code>020801a</code></a>
chore: add changelog for version 3.6.0</li>
<li><a
href="ed1d6f912c"><code>ed1d6f9</code></a>
test: make test script work on Windows (<a
href="https://github-redirect.dependabot.com/socketio/engine.io/issues/643">#643</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/socketio/engine.io/compare/6.1.3...6.2.1">compare
view</a></li>
</ul>
</details>
<br />
Updates `socket.io` from 4.4.1 to 4.5.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/socketio/socket.io/releases">socket.io's
releases</a>.</em></p>
<blockquote>
<h2>4.5.3</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>typings:</strong> accept an HTTP2 server in the constructor
(<a
href="d3d0a2d5be">d3d0a2d</a>)</li>
<li><strong>typings:</strong> apply types to
"io.timeout(...).emit()" calls (<a
href="e357daf585">e357daf</a>)</li>
</ul>
<h4>Links:</h4>
<ul>
<li>Diff: <a
href="https://github.com/socketio/socket.io/compare/4.5.2...4.5.3">https://github.com/socketio/socket.io/compare/4.5.2...4.5.3</a></li>
<li>Client release: <a
href="https://github.com/socketio/socket.io-client/releases/tag/4.5.3">4.5.3</a></li>
<li>engine.io version: <code>~6.2.0</code></li>
<li>ws version: <code>~8.2.3</code></li>
</ul>
<h2>4.5.2</h2>
<h3>Bug Fixes</h3>
<ul>
<li>prevent the socket from joining a room after disconnection (<a
href="18f3fdab12">18f3fda</a>)</li>
<li><strong>uws:</strong> prevent the server from crashing after upgrade
(<a
href="ba497ee3eb">ba497ee</a>)</li>
</ul>
<h4>Links:</h4>
<ul>
<li>Diff: <a
href="https://github.com/socketio/socket.io/compare/4.5.1...4.5.2">https://github.com/socketio/socket.io/compare/4.5.1...4.5.2</a></li>
<li>Client release: <a
href="https://github.com/socketio/socket.io-client/releases/tag/4.5.2">4.5.2</a></li>
<li>engine.io version: <code>~6.2.0</code></li>
<li>ws version: <code>~8.2.3</code></li>
</ul>
<h2>4.5.1</h2>
<h3>Bug Fixes</h3>
<ul>
<li>forward the local flag to the adapter when using fetchSockets() (<a
href="30430f0985">30430f0</a>)</li>
<li><strong>typings:</strong> add HTTPS server to accepted types (<a
href="https://github-redirect.dependabot.com/socketio/socket.io/issues/4351">#4351</a>)
(<a
href="9b43c9167c">9b43c91</a>)</li>
</ul>
<h4>Links:</h4>
<ul>
<li>Diff: <a
href="https://github.com/socketio/socket.io/compare/4.5.0...4.5.1">https://github.com/socketio/socket.io/compare/4.5.0...4.5.1</a></li>
<li>Client release: <a
href="https://github.com/socketio/socket.io-client/releases/tag/4.5.1">4.5.1</a></li>
<li>engine.io version: <code>~6.2.0</code></li>
<li>ws version: <code>~8.2.3</code></li>
</ul>
<h2>4.5.0</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>typings:</strong> ensure compatibility with TypeScript 3.x
(<a
href="https://github-redirect.dependabot.com/socketio/socket.io/issues/4259">#4259</a>)
(<a
href="02c87a8561">02c87a8</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>add support for catch-all listeners for outgoing packets (<a
href="531104d332">531104d</a>)</li>
</ul>
<p>This is similar to <code>onAny()</code>, but for outgoing
packets.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/socketio/socket.io/blob/main/CHANGELOG.md">socket.io's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/socketio/socket.io/compare/4.5.2...4.5.3">4.5.3</a>
(2022-10-15)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>typings:</strong> accept an HTTP2 server in the constructor
(<a
href="d3d0a2d5be">d3d0a2d</a>)</li>
<li><strong>typings:</strong> apply types to
"io.timeout(...).emit()" calls (<a
href="e357daf585">e357daf</a>)</li>
</ul>
<h2><a
href="https://github.com/socketio/socket.io/compare/4.5.1...4.5.2">4.5.2</a>
(2022-09-02)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>prevent the socket from joining a room after disconnection (<a
href="18f3fdab12">18f3fda</a>)</li>
<li><strong>uws:</strong> prevent the server from crashing after upgrade
(<a
href="ba497ee3eb">ba497ee</a>)</li>
</ul>
<h1><a
href="https://github.com/socketio/socket.io/compare/2.4.1...2.5.0">2.5.0</a>
(2022-06-26)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>fix race condition in dynamic namespaces (<a
href="05e1278cfa">05e1278</a>)</li>
<li>ignore packet received after disconnection (<a
href="22d4bdf00d">22d4bdf</a>)</li>
<li>only set 'connected' to true after middleware execution (<a
href="226cc16165">226cc16</a>)</li>
<li>prevent the socket from joining a room after disconnection (<a
href="f223178eb6">f223178</a>)</li>
</ul>
<h2><a
href="https://github.com/socketio/socket.io/compare/4.5.0...4.5.1">4.5.1</a>
(2022-05-17)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>forward the local flag to the adapter when using fetchSockets() (<a
href="30430f0985">30430f0</a>)</li>
<li><strong>typings:</strong> add HTTPS server to accepted types (<a
href="https://github-redirect.dependabot.com/socketio/socket.io/issues/4351">#4351</a>)
(<a
href="9b43c9167c">9b43c91</a>)</li>
</ul>
<h1><a
href="https://github.com/socketio/socket.io/compare/4.4.1...4.5.0">4.5.0</a>
(2022-04-23)</h1>
<h3>Bug Fixes</h3>
<ul>
<li><strong>typings:</strong> ensure compatibility with TypeScript 3.x
(<a
href="https://github-redirect.dependabot.com/socketio/socket.io/issues/4259">#4259</a>)
(<a
href="02c87a8561">02c87a8</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="945c84be47"><code>945c84b</code></a>
chore(release): 4.5.3</li>
<li><a
href="d3d0a2d5be"><code>d3d0a2d</code></a>
fix(typings): accept an HTTP2 server in the constructor</li>
<li><a
href="19b225b0c8"><code>19b225b</code></a>
docs(examples): update dependencies of the basic CRUD example</li>
<li><a
href="8fae95dd18"><code>8fae95d</code></a>
docs: add jsdoc for each public method</li>
<li><a
href="e6f6b906db"><code>e6f6b90</code></a>
docs: add deprecation notice for the allSockets() method</li>
<li><a
href="596eb88af7"><code>596eb88</code></a>
ci: upgrade to actions/checkout@3 and actions/setup-node@3</li>
<li><a
href="e357daf585"><code>e357daf</code></a>
fix(typings): apply types to "io.timeout(...).emit()"
calls</li>
<li><a
href="10fa4a2690"><code>10fa4a2</code></a>
refactor: add list of possible disconnection reasons</li>
<li><a
href="8be95b3bd3"><code>8be95b3</code></a>
chore(release): 4.5.2</li>
<li><a
href="ba497ee3eb"><code>ba497ee</code></a>
fix(uws): prevent the server from crashing after upgrade</li>
<li>Additional commits viewable in <a
href="https://github.com/socketio/socket.io/compare/4.4.1...4.5.3">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>