ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
sushraja-msft 58c29d34b2
WIP: Dp4MatMulNBits accuracy level 4 matmul for WebGPU EP (#23365)
### Description

This change implements an accuracy level 4 (quantize A to int8) matmul for
the WebGPU EP. The matmul kernel uses DP4A instructions for the dot
products; to keep the DP4A units fed, a co-operative matrix multiplication
is implemented that preloads the relevant rows/columns into local
variables before the multiply.
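For context, DP4A computes the dot product of four packed int8 pairs and accumulates the result into an int32 in a single instruction. The actual kernel is a WGSL compute shader; the numpy sketch below only illustrates the instruction's semantics and one plausible per-block int8 quantization of A (the block size of 32 and symmetric scaling are illustrative assumptions, not details of this change):

```python
import numpy as np

def dp4a(a4: np.ndarray, b4: np.ndarray, acc: int) -> int:
    """Emulate DP4A: dot product of four int8 pairs, accumulated into int32."""
    assert a4.shape == (4,) and a4.dtype == np.int8 and b4.dtype == np.int8
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

def quantize_a_to_int8(a: np.ndarray, block: int = 32):
    """Symmetric per-block quantization of activations A to int8."""
    blocks = a.reshape(-1, block)
    scale = np.maximum(np.abs(blocks).max(axis=1, keepdims=True) / 127.0, 1e-12)
    q = np.clip(np.round(blocks / scale), -127, 127).astype(np.int8)
    return q, scale  # int8 values plus per-block fp scales for dequantization

# One accumulation step: four int8 products folded into an int32 accumulator.
a = np.random.randn(64).astype(np.float32)
q, scale = quantize_a_to_int8(a)
acc = dp4a(q[0, :4], q[0, 4:8], 0)
```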

Credits to @qjia7 for help with the quantizer shader.

Performance metrics on an Intel ADL/TGL GPU:

```
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web -l 500
Batch size: 1, prompt tokens: 501, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       2.76762e+06
        avg (tokens/s): 181.022   <<< Prefill speed
        p50 (us):       2.74843e+06
        stddev (us):    41756.4
        n:              5 * 501 token(s)
Token generation:
        avg (us):       81500.7
        avg (tokens/s): 12.2698
        p50 (us):       81104.1
        stddev (us):    2961.31
        n:              635 * 1 token(s)
Token sampling:
        avg (us):       13.1836
        avg (tokens/s): 75851.9
        p50 (us):       12
        stddev (us):    6.47085
        n:              640 * 1 token(s)
E2E generation (entire generation loop):
        avg (ms):       13120
        p50 (ms):       13081.6
        stddev (ms):    114.689
        n:              5
Peak working set size (bytes): 5467533312
WebGPU device lost (2): Device was destroyed.

```
This kernel is 2.10x faster than its F16 counterpart for a 500-token
prefill; the previous prefill record was 86 tokens/s.
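The speedup figure follows directly from the log: prefill tokens/s is the prompt token count divided by the average prompt-processing time, and the ratio against the prior 86 tokens/s record gives the 2.10x claim:

```python
prompt_tokens = 501           # "prompt tokens: 501" in the log above
avg_prefill_us = 2.76762e6    # prompt processing "avg (us)"

tokens_per_s = prompt_tokens / (avg_prefill_us / 1e6)
print(f"prefill: {tokens_per_s:.3f} tokens/s")            # ~181.022, matches the log

previous_record = 86          # prior F16 prefill record quoted above
print(f"speedup: {tokens_per_s / previous_record:.2f}x")  # ~2.10x
```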

To support devices with a subgroup size of 8 or 32, a no-subgroup version
of the same shader is included. Performance is slower than the subgroup
version on ADL:

```
PS C:\onnxruntime> C:\model_benchmark\model_benchmark.exe -i C:\Phi-3.5-mini-instruct-onnx-web\Phi-3.5-mini-instruct-onnx-web -l 500 
Batch size: 1, prompt tokens: 501, tokens to generate: 128
Prompt processing (time to first token):
        avg (us):       4.11989e+06
        avg (tokens/s): 121.605
        p50 (us):       4.11847e+06
        stddev (us):    2147.48
        n:              5 * 501 token(s)
Token generation:
        avg (us):       81174.9
        avg (tokens/s): 12.3191
        p50 (us):       81301.1
        stddev (us):    2177.2
        n:              635 * 1 token(s)
Token sampling:
        avg (us):       14.7998
        avg (tokens/s): 67568.3
        p50 (us):       12.3
        stddev (us):    11.5481
        n:              640 * 1 token(s)
E2E generation (entire generation loop):
        avg (ms):       14431.1
        p50 (ms):       14433.8
        stddev (ms):    5.02473
        n:              5
Peak working set size (bytes): 5466480640
WebGPU device lost (2): Device was destroyed.
```

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
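A minimal inference example with the Python API (the model path, input shape, and the CPU provider choice below are placeholders, not details from this repository):

```python
import numpy as np
import onnxruntime as ort

# Create a session; the providers list controls which execution provider runs the model.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's expected input, then run.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example shape; depends on your model
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```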

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
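A sketch of that one-line addition, assuming the onnxruntime-training package is installed; the toy MLP and single training step stand in for your existing PyTorch script:

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

# Any existing PyTorch model; this toy MLP is a placeholder.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
model = ORTModule(model)  # the one-line addition: forward/backward now run via ORT

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```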

Get Started & Resources

Builtin Pipeline Status

[Build status badge table — systems: Windows, Linux, Mac, Android, iOS, Web, Other; columns: Inference, Training.]

This project is tested with BrowserStack.

Third-party Pipeline Status

[Build status badge table — systems: Linux; columns: Inference, Training.]

Releases

The current release and past releases can be found here: https://github.com/microsoft/onnxruntime/releases.

For details on the upcoming release, including release dates, announcements, features, and guidance on submitting feature requests, please visit the release roadmap: https://onnxruntime.ai/roadmap.

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.