onnxruntime/js/web/docs/webgl-operators.md
liqun Fu cd7112f800
Integration with ONNX 1.16.0 (#19745)
### Description
Update to the ONNX 1.16.0 branch according to
https://github.com/microsoft/onnxruntime/blob/main/docs/How_To_Update_ONNX_Dev_Notes.md

ONNX 1.16.0 release notes:
https://github.com/onnx/onnx/releases/tag/v1.16.0

#### Updated ops for CPU EP:
- DequantizeLinear(21)
  - Added int16 and uint16 support + various optimizer tests
  - Missing int4 and uint4 support
  - Missing block dequantization support
- QuantizeLinear(21)
  - Added int16 and uint16 support + various optimizer tests
  - Missing int4 and uint4 support
  - Missing block quantization support
- Cast(21)
  - Missing int4 and uint4 support
- CastLike(21)
  - Missing int4 and uint4 support
- ConstantOfShape(21)
  - Missing int4 and uint4 support
- Identity(21)
  - Missing int4 and uint4 support
- If(21)
  - Missing int4 and uint4 support
- Loop(21)
  - Missing int4 and uint4 support
- Reshape(21)
  - Missing int4 and uint4 support
- Scan(21)
  - Missing int4 and uint4 support
- Shape(21)
  - Missing int4 and uint4 support
- Size(21)
  - Missing int4 and uint4 support
- Flatten(21)
  - Missing float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4, and uint4 support
- Pad(21)
  - Missing float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4, and uint4 support
- Squeeze(21)
  - Missing float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4, and uint4 support
- Transpose(21)
  - Missing float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4, and uint4 support
- Unsqueeze(21)
  - Missing float8e4m3fnuz, float8e5m2, float8e5m2fnuz, int4, and uint4 support
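The int16/uint16 additions above follow the standard per-tensor QuantizeLinear/DequantizeLinear formulas from the ONNX spec (`y = saturate(round(x / scale) + zero_point)` and its inverse). A minimal NumPy sketch of the round trip, for illustration only (not ORT code):

```python
import numpy as np

def quantize_linear(x, scale, zero_point, qmin=-32768, qmax=32767):
    """Per-tensor QuantizeLinear to int16: saturate(round(x/scale) + zp).
    np.rint uses round-half-to-even, matching the ONNX rounding rule."""
    q = np.rint(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int16)

def dequantize_linear(q, scale, zero_point):
    """DequantizeLinear: x = (q - zero_point) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = np.float32(0.01), 0
q = quantize_linear(x, scale, zero_point)        # [-100, 0, 50, 100]
x_hat = dequantize_linear(q, scale, zero_point)  # recovers x (up to rounding)
```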

#### Unimplemented opset 21 features/ops
- int4 and uint4 data type
- QLinearMatMul(21)
- GroupNormalization(21)
- ai.onnx.ml.TreeEnsemble(5)
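Several of the gaps above come down to the new 4-bit data types, which ONNX stores packed two elements per byte, low nibble first. A hedged NumPy sketch of that packing (illustrative only, not the ORT implementation):

```python
import numpy as np

def pack_int4(values):
    """Pack signed int4 values two-per-byte, low nibble first
    (the packed layout ONNX describes for its 4-bit tensor types)."""
    v = np.asarray(values, dtype=np.int8)
    assert v.min() >= -8 and v.max() <= 7, "int4 range is [-8, 7]"
    if v.size % 2:                        # pad odd-length input with 0
        v = np.append(v, 0)
    nib = (v & 0x0F).astype(np.uint8)     # two's-complement nibbles
    return (nib[0::2] | (nib[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed, count):
    """Inverse: recover `count` signed int4 values from packed bytes."""
    b = np.asarray(packed, dtype=np.uint8)
    lo = (b & 0x0F).astype(np.int8)
    hi = (b >> 4).astype(np.int8)
    out = np.empty(b.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = lo, hi
    out = np.where(out > 7, out - 16, out)  # sign-extend each nibble
    return out[:count]

vals = [-8, -1, 0, 7, 3]
packed = pack_int4(vals)                  # 3 bytes for 5 values
assert unpack_int4(packed, 5).tolist() == vals
```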

### Motivation and Context

### Disabled tests
#### ORT Training

orttraining/orttraining/test/python/orttraining_test_ort_apis_py_bindings.py
- test_ort_custom_ops: Potential shape inference bug for custom ops

#### Python quantization unit tests
test/onnx/python/quantization (shape inference bug)
- test_op_conv_transpose.py: test_quantize_conv_transpose_u8u8_fp16
- test_op_conv_transpose.py: test_quantize_conv_transpose_s8s8_fp16
- test_op_gemm.py: test_quantize_qop_gemm_s8s8
- test_op_gemm.py: test_quantize_qop_gemm_e4m3fn_same
- test_op_gemm.py: test_quantize_qop_gemm_e4m3fn_p3
- test_op_matmul.py: test_quantize_matmul_u8u8_f16
- test_op_matmul.py: test_quantize_matmul_s8s8_f16
- test_op_matmul.py: test_quantize_matmul_s8s8_f16_entropy
- test_op_matmul.py: test_quantize_matmul_s8s8_f16_percentile
- test_op_matmul.py: test_quantize_matmul_s8s8_f16_distribution
- test_op_relu.py: test_quantize_qop_relu_s8s8

#### ONNX tests
- test_maxpool_2d_ceil_output_size_reduce_by_one: ONNX 1.16.0 fixed a
maxpool output size bug and added this test. Enable this test when [ORT
PR](https://github.com/microsoft/onnxruntime/pull/18377) is merged.
Refer to original [ONNX PR](https://github.com/onnx/onnx/pull/5741).
- test_ai_onnx_ml_tree_ensemble_set_membership_cpu: new unimplemented op
ai.onnx.ml.TreeEnsemble
- test_ai_onnx_ml_tree_ensemble_single_tree_cpu: same
- test_ai_onnx_ml_tree_ensemble_set_membership_cuda: same
- test_ai_onnx_ml_tree_ensemble_single_tree_cuda: same
- test_cast_INT4_to_FLOAT_cpu: ORT Cast(21) impl doesn't support int4
yet
- test_cast_INT4_to_INT8_cpu: same
- test_cast_UINT4_to_FLOAT_cpu: same
- test_cast_UINT4_to_UINT8_cpu: same
- test_cast_INT4_to_FLOAT_cuda: same
- test_cast_INT4_to_INT8_cuda: same
- test_cast_UINT4_to_FLOAT_cuda: same
- test_cast_UINT4_to_UINT8_cuda: same
- test_constantofshape_float_ones_cuda: ConstantOfShape(21) not
implemented for cuda
- test_constantofshape_int_shape_zero_cuda: same
- test_constantofshape_int_zeros_cuda: same
- test_flatten_axis0_cuda: Flatten(21) not implemented for cuda
- test_flatten_axis1_cuda: same
- test_flatten_axis2_cuda: same
- test_flatten_axis3_cuda: same
- test_flatten_default_axis_cuda: same
- test_flatten_negative_axis1_cuda: same
- test_flatten_negative_axis2_cuda: same
- test_flatten_negative_axis3_cuda: same
- test_flatten_negative_axis4_cuda: same
- test_qlinearmatmul_2D_int8_float16_cpu: QLinearMatMul(21) not
implemented in ORT yet
- test_qlinearmatmul_2D_int8_float32_cpu: same
- test_qlinearmatmul_2D_uint8_float16_cpu: same
- test_qlinearmatmul_2D_uint8_float32_cpu: same
- test_qlinearmatmul_3D_int8_float16_cpu: same
- test_qlinearmatmul_3D_int8_float32_cpu: same
- test_qlinearmatmul_3D_uint8_float16_cpu: same
- test_qlinearmatmul_3D_uint8_float32_cpu: same
- test_qlinearmatmul_2D_int8_float16_cuda: same
- test_qlinearmatmul_2D_int8_float32_cuda: same
- test_qlinearmatmul_2D_uint8_float16_cuda: same
- test_qlinearmatmul_2D_uint8_float32_cuda: same
- test_qlinearmatmul_3D_int8_float16_cuda: same
- test_qlinearmatmul_3D_int8_float32_cuda: same
- test_qlinearmatmul_3D_uint8_float16_cuda: same
- test_qlinearmatmul_3D_uint8_float32_cuda: same
- test_size_cuda: Size(21) not implemented for cuda
- test_size_example_cuda: same
- test_dequantizelinear_blocked: Missing implementation for block
dequant for DequantizeLinear(21)
- test_quantizelinear_blocked_asymmetric: Missing implementation for
block quant for QuantizeLinear(21)
- test_quantizelinear_blocked_symmetric: Missing implementation for
block quant for QuantizeLinear(21)
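Block (per-group) quantization, new in opset 21, attaches one scale and zero-point to each block of `block_size` consecutive elements along an axis instead of one per tensor. A NumPy sketch of what blocked DequantizeLinear computes (illustrative only, not the missing ORT implementation):

```python
import numpy as np

def dequantize_blocked(q, scales, zero_points, block_size, axis=0):
    """Blocked DequantizeLinear sketch: each run of `block_size` elements
    along `axis` gets its own scale/zero-point. `scales`/`zero_points`
    have q's shape with dim `axis` replaced by ceil(dim / block_size)."""
    # Map every index along `axis` to its block index, then broadcast
    # the per-block parameters back to q's shape.
    idx = np.arange(q.shape[axis]) // block_size
    s = np.take(scales, idx, axis=axis)
    zp = np.take(zero_points, idx, axis=axis)
    return (q.astype(np.float32) - zp) * s

q = np.array([[10, 20], [30, 40], [50, 60], [70, 80]], dtype=np.int8)
scales = np.array([[0.1, 0.1], [0.5, 0.5]], dtype=np.float32)  # 2 row-blocks
zero_points = np.zeros((2, 2), dtype=np.int8)
x = dequantize_blocked(q, scales, zero_points, block_size=2)
```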

---------

Signed-off-by: liqunfu <liqun.fu@microsoft.com>
Signed-off-by: Ganesan Ramalingam <grama@microsoft.com>
Co-authored-by: Ganesan Ramalingam <grama@microsoft.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: adrianlizarraga <adlizarraga@microsoft.com>
2024-04-12 09:46:49 -07:00


## Operators Support Table

The following table shows which onnx opset versions of the ai.onnx operators are currently supported by ONNX Runtime Web. For example, `4-6, 8+` means ONNX Runtime Web currently supports opset versions 4 through 6, and 8 and above.

See Compatibility for a list of the supported platforms.

This file is automatically generated from the def files via this script. Do not modify directly.
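The range notation can be checked mechanically. A small hypothetical helper (not part of ONNX Runtime Web) that tests whether an opset version falls inside a support string like `4-6, 8+`:

```python
def supports(ranges: str, opset: int) -> bool:
    """Return True if `opset` falls in a support string like "4-6, 8+".
    Hypothetical helper for reading the table; not part of ONNX Runtime."""
    for part in ranges.split(","):
        part = part.strip()
        if not part:
            continue
        if part.endswith("+"):            # "8+" means 8 and above
            if opset >= int(part[:-1]):
                return True
        elif "-" in part:                 # "4-6" means 4 through 6
            lo, hi = map(int, part.split("-"))
            if lo <= opset <= hi:
                return True
        elif opset == int(part):          # a single version like "13"
            return True
    return False

assert supports("4-6, 8+", 5)       # inside 4-6
assert not supports("4-6, 8+", 7)   # in the gap
assert supports("6-12, 13+", 21)    # 13+ covers everything newer
```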

| Operator | WebGL Backend |
|----------|---------------|
| Abs | 6-12, 13+ |
| Acos | 7+ |
| Acosh | |
| Add | 7-12, 13, 14+ |
| AffineGrid | |
| And | 7+ |
| ArgMax | |
| ArgMin | |
| Asin | 7+ |
| Asinh | |
| Atan | 7+ |
| Atanh | |
| AveragePool | 7-9, 10, 11-18, 19+ |
| BatchNormalization | 7-8, 9-13, 14, 15+ |
| Bernoulli | |
| BitShift | |
| BitwiseAnd | |
| BitwiseNot | |
| BitwiseOr | |
| BitwiseXor | |
| BlackmanWindow | |
| Cast | 6-8, 9-12, 13-18, 19-20, 21+ |
| CastLike | |
| Ceil | 6-12, 13+ |
| Celu | |
| CenterCropPad | |
| Clip | 6-10, 11, 12, 13+ |
| Col2Im | |
| Compress | |
| Concat | 4-10, 11-12, 13+ |
| ConcatFromSequence | |
| Constant | |
| ConstantOfShape | |
| Conv | 1-10, 11+ |
| ConvInteger | |
| ConvTranspose | 1-10, 11+ |
| Cos | 7+ |
| Cosh | |
| CumSum | |
| DFT | |
| DeformConv | |
| DepthToSpace | 1-10, 11-12, 13+ |
| DequantizeLinear | |
| Det | |
| Div | 7-12, 13, 14+ |
| Dropout | 7-9, 10-11, 12, 13+ |
| DynamicQuantizeLinear | |
| Einsum | |
| Elu | 6+ |
| Equal | 7-10, 11-12, 13-18, 19+ |
| Erf | |
| Exp | 6-12, 13+ |
| Expand | |
| EyeLike | |
| Flatten | 1-8, 9-10, 11-12, 13-20, 21+ |
| Floor | 6-12, 13+ |
| GRU | |
| Gather | 1-10, 11-12, 13+ |
| GatherElements | |
| GatherND | |
| Gelu | |
| Gemm | 7-8, 9-10, 11-12, 13+ |
| GlobalAveragePool | 1+ |
| GlobalLpPool | |
| GlobalMaxPool | 1+ |
| Greater | 7-8, 9-12, 13+ |
| GreaterOrEqual | |
| GridSample | |
| GroupNormalization | |
| HammingWindow | |
| HannWindow | |
| HardSigmoid | |
| HardSwish | |
| Hardmax | |
| Identity | 1-12, 13, 14-15, 16-18, 19-20, 21+ |
| If | |
| ImageDecoder | |
| InstanceNormalization | 6+ |
| IsInf | |
| IsNaN | |
| LRN | 1-12, 13+ |
| LSTM | |
| LayerNormalization | |
| LeakyRelu | 6-15, 16+ |
| Less | 7-8, 9-12, 13+ |
| LessOrEqual | |
| Log | 6-12, 13+ |
| LogSoftmax | |
| Loop | |
| LpNormalization | |
| LpPool | |
| MatMul | 1-8, 9-12, 13+ |
| MatMulInteger | |
| Max | |
| MaxPool | 1-7, 8-9, 10, 11, 12+ |
| MaxRoiPool | |
| MaxUnpool | |
| Mean | |
| MeanVarianceNormalization | |
| MelWeightMatrix | |
| Min | |
| Mish | |
| Mod | |
| Mul | 7-12, 13, 14+ |
| Multinomial | |
| Neg | 6-12, 13+ |
| NegativeLogLikelihoodLoss | |
| NonMaxSuppression | |
| NonZero | |
| Not | 1+ |
| OneHot | |
| Optional | |
| OptionalGetElement | |
| OptionalHasElement | |
| Or | 7+ |
| PRelu | 7-8, 9-15, 16+ |
| Pad | 2-10, 11-12, 13-17, 18, 19-20, 21+ |
| Pow | 7-11, 12, 13-14, 15+ |
| QLinearConv | |
| QLinearMatMul | |
| QuantizeLinear | |
| RNN | |
| RandomNormal | |
| RandomNormalLike | |
| RandomUniform | |
| RandomUniformLike | |
| Range | |
| Reciprocal | |
| ReduceL1 | |
| ReduceL2 | |
| ReduceLogSum | 1-10, 11-12, 13-17, 18+ |
| ReduceLogSumExp | |
| ReduceMax | 1-10, 11, 12, 13-17, 18-19, 20+ |
| ReduceMean | 1-10, 11-12, 13-17, 18+ |
| ReduceMin | 1-10, 11, 12, 13-17, 18-19, 20+ |
| ReduceProd | 1-10, 11-12, 13-17, 18+ |
| ReduceSum | 1-10, 11-12 |
| ReduceSumSquare | 1-10, 11-12, 13-17, 18+ |
| RegexFullMatch | |
| Relu | 6-12, 13, 14+ |
| Reshape | 5-12, 13, 14-18, 19-20, 21+ |
| Resize | 10, 11-12, 13-17, 18, 19+ |
| ReverseSequence | |
| RoiAlign | |
| Round | |
| STFT | |
| Scan | |
| Scatter | |
| ScatterElements | |
| ScatterND | |
| Selu | |
| SequenceAt | |
| SequenceConstruct | |
| SequenceEmpty | |
| SequenceErase | |
| SequenceInsert | |
| SequenceLength | |
| SequenceMap | |
| Shape | 1-12, 13-14, 15-18, 19-20, 21+ |
| Shrink | |
| Sigmoid | 6-12, 13+ |
| Sign | |
| Sin | 7+ |
| Sinh | |
| Size | |
| Slice | 1-9, 10, 11-12, 13+ |
| Softmax | 1-10, 11-12, 13+ |
| SoftmaxCrossEntropyLoss | |
| Softplus | |
| Softsign | |
| SpaceToDepth | |
| Split | 2-10, 11-12 |
| SplitToSequence | |
| Sqrt | 6-12, 13+ |
| Squeeze | 1-10, 11-12, 13-20, 21+ |
| StringConcat | |
| StringNormalizer | |
| StringSplit | |
| Sub | 7-12, 13, 14+ |
| Sum | 6-7, 8-12, 13+ |
| Tan | 7+ |
| Tanh | 6-12, 13+ |
| TfIdfVectorizer | |
| ThresholdedRelu | |
| Tile | 6-12, 13+ |
| TopK | |
| Transpose | 1-12, 13-20, 21+ |
| Trilu | |
| Unique | |
| Unsqueeze | 1-10, 11-12, 13-20, 21+ |
| Upsample | 7-8, 9 |
| Where | |
| Xor | 7+ |