### Description
This PR adds kernel implementations for the "Not" and "Equal" operators, and removes the download cache in the GPU data manager.
**Why remove the download cache?**
The following test case failed. ("Or" runs on the CPU EP, while "Greater" and "Equal" run on JSEP.)

After debugging, I found that both "Equal" and "Greater" were using the
same output GPU data ID. When ORT executes the graph, it first runs
"Equal", whose shader writes into GPU data ID 2; then a GPU-to-CPU copy
is issued for that output (because "Or" currently runs on the CPU EP).
At that point ORT considers GPU data ID 2 free to use, so it reuses it
as the output of "Greater". As a result, no new allocation is made for
the output of the "Greater" kernel, and both kernels write to GPU data
ID 2.
For the GPU data manager, this means two downloads happen from the same
GPU buffer. Previously I thought this was a waste of resources, so I
cached the downloaded data. It now turns out that both downloads are
necessary, because the buffer's contents change between them, so the
download cache must be removed.
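As a minimal Python sketch of the failure mode (hypothetical class and data, not the actual GPU data manager code), a download cache keyed only by buffer ID returns stale data once the runtime reuses that buffer for a different kernel's output:

```python
# Hypothetical sketch, NOT the real gpu_data_manager: a download cache
# keyed by GPU buffer ID goes stale when the buffer is reused.
class CachedDownloader:
    def __init__(self, buffers):
        self.buffers = buffers          # buffer_id -> current GPU contents
        self.cache = {}                 # buffer_id -> first downloaded copy

    def download(self, buffer_id):
        if buffer_id not in self.cache:
            self.cache[buffer_id] = list(self.buffers[buffer_id])
        return self.cache[buffer_id]    # stale if the buffer was rewritten

buffers = {2: [1, 0, 1]}                # "Equal" wrote its result here
dl = CachedDownloader(buffers)
first = dl.download(2)                  # copy taken for the CPU "Or" input
buffers[2] = [0, 1, 1]                  # ORT reuses ID 2 for "Greater"
second = dl.download(2)                 # returns the cached, now-wrong data
assert second == [1, 0, 1]              # bug: the buffer now holds [0, 1, 1]
```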
### Description
Added JSEP Gemm registration for opset 13. Gemm was falling back to the
CPU provider, since the CPU EP registers it for opset 13.
---------
Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
### Description
Add SkipLayerNormalization operator to JSEP.
### Description
Fix some failing Resize tests.
---------
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
### Description
Added two kernels, for Layer and Instance normalization.
Also raised the `maxBufferSize` limit when requesting the GPU device:
by default it is capped at 256 MB, which fails when allocating a 600 MB
buffer while running fp32 StableDiffusion weights.
### Motivation and Context
These two are used in StableDiffusion and many other networks
### Description
Added a Gather op that works with both i32 and i64 indices, assuming the
index values fit within the i32 range. The assumption is safe because it
is not possible to allocate a buffer larger than 2 GB for inputs.
The kernel treats all input tensor data as u32, copying one element per
value, or two for i64, u64, and double.
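The i64 index handling described above can be sketched in plain Python (hypothetical helper, not the actual kernel code): for non-negative indices that fit in i32, the low 32-bit word of each little-endian i64 value recovers the index.

```python
import struct

# Hypothetical sketch of the trick described above: view i64 index data
# as pairs of u32 words and keep only the low word of each pair. This
# only works for non-negative values that fit in the i32 range.
def i64_indices_as_u32(raw_bytes):
    words = struct.unpack("<%dI" % (len(raw_bytes) // 4), raw_bytes)
    return [words[i] for i in range(0, len(words), 2)]  # low words only

indices = [0, 3, 2]
raw = struct.pack("<%dq" % len(indices), *indices)  # little-endian i64
assert i64_indices_as_u32(raw) == [0, 3, 2]
```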
---------
Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
argmax and argmin are similar to reduce. Eventually we need to add
optimized flavors of the shader.
softmax is optimized, but for now it only works on the last axis, which
should be the common use case.
TODO: enable more unit tests for argmax/argmin.
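The last-axis softmax described above can be sketched in plain Python (reference logic only, not the WGSL shader): subtract the row max for numerical stability, then exponentiate and normalize each row independently.

```python
import math

# Reference sketch of a last-axis softmax, matching the standard
# definition: softmax(x)_i = exp(x_i - max(x)) / sum_j exp(x_j - max(x)).
def softmax_last_axis(rows):
    out = []
    for row in rows:
        m = max(row)                           # subtract max for stability
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

probs = softmax_last_axis([[1.0, 2.0, 3.0]])
assert abs(sum(probs[0]) - 1.0) < 1e-9         # each row sums to 1
```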
### Description
Added Resize NHWC domain kernel registration.
### Description
Implemented Resize operator support in JSEP
### Description
Added Gelu operator to JSEP
### Description
Added Flatten operator support to JSEP.
### Description
Added Slice operator support to JSEP.
### Description
Added Expand operator support.
### Description
Add ConvTranspose support for WebGPU
### Description
Added WebGPU/JSEP Split operator support.
### Description
Add Concat operator
### Description
Added support for ReduceL1, ReduceL2, ReduceMean, ReduceMin, ReduceMax,
ReduceSum, ReduceLogSum, ReduceLogSumExp, ReduceProd, and
ReduceSumSquare.
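Several of these reductions derive from a plain sum over the reduced elements. A reference sketch in plain Python (illustrative helpers only, not the actual shader code):

```python
import math

# Reference definitions of a few of the listed reductions, applied to a
# flat list of the elements being reduced (illustrative only).
def reduce_sum(xs):
    return sum(xs)

def reduce_mean(xs):
    return sum(xs) / len(xs)

def reduce_log_sum(xs):
    return math.log(sum(xs))

def reduce_log_sum_exp(xs):
    return math.log(sum(math.exp(x) for x in xs))

def reduce_sum_square(xs):
    return sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
assert reduce_sum(xs) == 6.0
assert abs(reduce_log_sum_exp(xs) - 3.407606) < 1e-5
```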
---------
Co-authored-by: Satya Jandhyala <sajandhy@microsoft.com>
Co-authored-by: guschmue <guschmue@microsoft.com>
### Description
This PR adds an implementation of the `Unsqueeze` operator to WebGPU JSEP.
The implementation follows the [operator
schema](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Unsqueeze).
To implement the `Unsqueeze` operator in the same fashion as `Squeeze`,
I added a `ComputeOutputShape()` method to the `UnsqueezeBase` class and
made some slight modifications. Please let me know if this is a bad idea
and whether I should move this method to the JS implementation instead.
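For reference, the Unsqueeze output-shape rule can be sketched in Python (a hypothetical helper mirroring the ONNX semantics, not the actual `ComputeOutputShape()` code):

```python
# Hypothetical sketch of the ONNX Unsqueeze shape rule: each axis in
# `axes` (possibly negative, relative to the OUTPUT rank) becomes a new
# dimension of size 1; remaining output slots take the input dims in order.
def unsqueeze_output_shape(input_shape, axes):
    out_rank = len(input_shape) + len(axes)
    axes = sorted(a if a >= 0 else a + out_rank for a in axes)
    shape, it = [], iter(input_shape)
    for i in range(out_rank):
        shape.append(1 if i in axes else next(it))
    return shape

# Matches the test model below: [3, 4, 5] with axes [1, 4].
assert unsqueeze_output_shape([3, 4, 5], [1, 4]) == [3, 1, 4, 5, 1]
```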
I also uncommented test case lines in the `suite-test-list.jsonc` file
for both Squeeze and Unsqueeze operators following @hariharans29's
[comment](https://github.com/microsoft/onnxruntime/pull/16024#issuecomment-1565113633).
### How was it tested
1. I created a model with only one operator:
```Python
import onnx.helper

node = onnx.helper.make_node(
    "Unsqueeze",
    inputs=["T", "axes"],
    outputs=["y"],
)
graph = onnx.helper.make_graph(
    [node],
    "test",
    [onnx.helper.make_tensor_value_info("T", 1, [3, 4, 5]),
     onnx.helper.make_tensor_value_info("axes", 7, [2])],
    [onnx.helper.make_tensor_value_info("y", 1, [3, 1, 4, 5, 1])],
)
onnx.save(onnx.helper.make_model(graph), "unsqueeze.onnx")
```
2. I compiled the runtime using @fs-eire's
[instructions](https://gist.github.com/fs-eire/a55b2c7e10a6864b9602c279b8b75dce).
3. I ran the test models in the browser using this minimal setup:
```HTML
<html>
  <script src=".\dist\ort.webgpu.min.js"></script>
  <script>
    async function run() {
      const session = await ort.InferenceSession.create('unsqueeze.onnx', { executionProviders: ['webgpu'] });
      console.log(session);
      const input = new ort.Tensor('float32', new Float32Array(60), [3, 4, 5]);
      const dim = new ort.Tensor('int64', [1n, 4n], [2]);
      const output = await session.run({ "T": input, "axes": dim });
      console.log(output);
    }
    run();
  </script>
</html>
```
### Motivation and Context
Improve operator coverage for WebGPU JSEP.
### Description
This PR adds an implementation of the `Squeeze` operator to WebGPU JSEP.
The implementation follows the [operator
schema](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Squeeze)
and allows one or two inputs.
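For reference, the Squeeze output-shape rule with an optional second input and negative axes can be sketched in Python (hypothetical helper, not the actual kernel code):

```python
# Hypothetical sketch of the ONNX Squeeze shape rule: with no `axes`,
# drop every dimension equal to 1; with `axes` (possibly negative,
# relative to the input rank), drop only those dimensions.
def squeeze_output_shape(input_shape, axes=None):
    rank = len(input_shape)
    if axes is None:
        return [d for d in input_shape if d != 1]
    axes = {a if a >= 0 else a + rank for a in axes}
    return [d for i, d in enumerate(input_shape) if i not in axes]

# Matches the two test models below.
assert squeeze_output_shape([3, 1, 4, 5]) == [3, 4, 5]
assert squeeze_output_shape([3, 1, 4, 5], [-3]) == [3, 4, 5]
```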
### How was it tested
1. I created two models. Without `axes`:
```Python
import onnx.helper

node = onnx.helper.make_node(
    "Squeeze",
    inputs=["T"],
    outputs=["y"],
)
graph = onnx.helper.make_graph(
    [node],
    "test",
    [onnx.helper.make_tensor_value_info("T", 1, [3, 1, 4, 5])],
    [onnx.helper.make_tensor_value_info("y", 1, [3, 4, 5])],
)
onnx.save(onnx.helper.make_model(graph), "squeeze.onnx")
```
And with `axes`:
```Python
import onnx.helper

node = onnx.helper.make_node(
    "Squeeze",
    inputs=["T", "axes"],
    outputs=["y"],
)
graph = onnx.helper.make_graph(
    [node],
    "test",
    [onnx.helper.make_tensor_value_info("T", 1, [3, 1, 4, 5]),
     onnx.helper.make_tensor_value_info("axes", 7, [1])],
    [onnx.helper.make_tensor_value_info("y", 1, [3, 4, 5])],
)
onnx.save(onnx.helper.make_model(graph), "squeeze-dim.onnx")
```
2. I compiled the runtime using @fs-eire's
[instructions](https://gist.github.com/fs-eire/a55b2c7e10a6864b9602c279b8b75dce).
3. I ran the test models in the browser using this minimal setup:
```HTML
<html>
  <script src=".\dist\ort.webgpu.min.js"></script>
  <script>
    async function run() {
      const session = await ort.InferenceSession.create('squeeze-dim.onnx', { executionProviders: ['webgpu'] });
      console.log(session);
      const input = new ort.Tensor('float32', new Float32Array(60), [3, 1, 4, 5]);
      const dim = new ort.Tensor('int64', [-3n], [1]);
      const output = await session.run({ "T": input, "axes": dim });
      console.log(output);
    }
    run();
  </script>
</html>
```
### Motivation and Context
Improve operator coverage for WebGPU JSEP.