### Description
This change implements accuracy level 4 (quantize A to int8) matmul for
the WebGPU EP. The matmul kernel uses DP4A instructions for the matrix
multiplication; to keep the DP4A units fed, a co-operative matrix
multiplication is implemented that preloads the row/column data into
local variables before the multiply.
Credit to @qjia7 for help with the quantizer shader.
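To make the approach concrete, below are two heavily simplified sketches; every name, binding, and block size in them is an illustrative assumption, not the actual shader code. The first shows the idea behind the quantizer: symmetric per-block int8 quantization of A, packing four values per u32 word (`pack4x8snorm` maps [-1, 1] floats onto signed 8-bit lanes).
```wgsl
// Hypothetical quantize-A shader: one invocation quantizes one block.
@group(0) @binding(0) var<storage, read> a_f32 : array<vec4<f32>>;
@group(0) @binding(1) var<storage, read_write> a_q : array<u32>;     // 4 x int8 per word
@group(0) @binding(2) var<storage, read_write> a_scale : array<f32>; // 1 scale per block

const kBlockVec4s : u32 = 8u;  // 32 values per quantization block (assumed size)

@compute @workgroup_size(64)
fn quantize(@builtin(global_invocation_id) gid : vec3<u32>) {
  let base = gid.x * kBlockVec4s;
  // Symmetric quantization: scale by the block's absolute maximum.
  var absmax : f32 = 1e-6;
  for (var i = 0u; i < kBlockVec4s; i++) {
    let v = abs(a_f32[base + i]);
    absmax = max(absmax, max(max(v.x, v.y), max(v.z, v.w)));
  }
  for (var i = 0u; i < kBlockVec4s; i++) {
    a_q[base + i] = pack4x8snorm(a_f32[base + i] / absmax);
  }
  a_scale[gid.x] = absmax / 127.0;  // dequantize later as: int8 value * scale
}
```
The second sketches the DP4A inner loop: the row/column fragments are preloaded into private arrays so the hot loop can issue back-to-back `dot4I8Packed` (DP4A) operations.
```wgsl
// dot4I8Packed multiplies the four int8 lanes of two u32 words and sums
// them into an i32 (i.e. DP4A); it needs the WGSL
// packed_4x8_integer_dot_product language feature.
@group(0) @binding(0) var<storage, read> a_q : array<u32>;
@group(0) @binding(1) var<storage, read> b_q : array<u32>;

const kWords : u32 = 8u;  // packed words per fragment (32 int8 values, assumed)

fn tile_dot(a_base : u32, b_base : u32) -> i32 {
  // Co-operative preload: copy both fragments into local variables first
  // so the loop below is nothing but DP4A accumulation.
  var a_frag : array<u32, kWords>;
  var b_frag : array<u32, kWords>;
  for (var i = 0u; i < kWords; i++) {
    a_frag[i] = a_q[a_base + i];
    b_frag[i] = b_q[b_base + i];
  }
  var acc = 0i;
  for (var i = 0u; i < kWords; i++) {
    acc += dot4I8Packed(a_frag[i], b_frag[i]);
  }
  return acc;  // caller rescales: f32 result = f32(acc) * scale_a * scale_b
}
```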
Performance metrics on an Intel ADL/TGL GPU:
```
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web -l 500
Batch size: 1, prompt tokens: 501, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 2.76762e+06
avg (tokens/s): 181.022  <<< Prefill speed
p50 (us): 2.74843e+06
stddev (us): 41756.4
n: 5 * 501 token(s)
Token generation:
avg (us): 81500.7
avg (tokens/s): 12.2698
p50 (us): 81104.1
stddev (us): 2961.31
n: 635 * 1 token(s)
Token sampling:
avg (us): 13.1836
avg (tokens/s): 75851.9
p50 (us): 12
stddev (us): 6.47085
n: 640 * 1 token(s)
E2E generation (entire generation loop):
avg (ms): 13120
p50 (ms): 13081.6
stddev (ms): 114.689
n: 5
Peak working set size (bytes): 5467533312
WebGPU device lost (2): Device was destroyed.
```
This kernel is 2.10x faster than its F16 counterpart for a 500-token
prefill; the previous prefill record was 86 tokens/s.
To support devices with subgroup sizes of 8 or 32, a no-subgroup version
of the same shader is included. It is slower than the subgroup version
on ADL, as the numbers below show.
```
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web -l 500
Batch size: 1, prompt tokens: 501, tokens to generate: 128
Prompt processing (time to first token):
avg (us): 4.11989e+06
avg (tokens/s): 121.605
p50 (us): 4.11847e+06
stddev (us): 2147.48
n: 5 * 501 token(s)
Token generation:
avg (us): 81174.9
avg (tokens/s): 12.3191
p50 (us): 81301.1
stddev (us): 2177.2
n: 635 * 1 token(s)
Token sampling:
avg (us): 14.7998
avg (tokens/s): 67568.3
p50 (us): 12.3
stddev (us): 11.5481
n: 640 * 1 token(s)
E2E generation (entire generation loop):
avg (ms): 14431.1
p50 (ms): 14433.8
stddev (ms): 5.02473
n: 5
Peak working set size (bytes): 5466480640
WebGPU device lost (2): Device was destroyed.
```
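For illustration only, here is one plausible way the two variants could differ; this is an assumption about the sharing strategy (subgroup broadcast vs. staging through workgroup memory), not the shader's actual code. The extra `workgroupBarrier()` synchronization in the fallback would be consistent with the slower ADL prefill above.
```wgsl
// Hypothetical no-subgroup fallback: instead of sharing each invocation's
// preloaded fragment via subgroupBroadcast (which assumes a fixed subgroup
// size), stage the shared A tile in workgroup memory behind a barrier.
@group(0) @binding(0) var<storage, read> a_q : array<u32>;        // packed int8 A
@group(0) @binding(1) var<storage, read> b_q : array<u32>;        // packed int8 B
@group(0) @binding(2) var<storage, read_write> out_acc : array<i32>;

var<workgroup> a_tile : array<u32, 64>;

@compute @workgroup_size(64)
fn main(@builtin(local_invocation_index) lid : u32,
        @builtin(workgroup_id) wid : vec3<u32>) {
  // Each invocation stages one packed word of the shared A tile.
  a_tile[lid] = a_q[wid.x * 64u + lid];
  workgroupBarrier();  // the subgroup version avoids this synchronization
  var acc = 0i;
  for (var i = 0u; i < 64u; i++) {
    acc += dot4I8Packed(a_tile[i], b_q[lid * 64u + i]);
  }
  out_acc[wid.x * 64u + lid] = acc;
}
```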