ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Arthur Islamov d0519a7603
[js/web] BiasSplitGelu and BiasAdd kernels (#17161)
### Description
Two contrib kernels that are supposed to speed up Stable Diffusion according to this doc:
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md

However, there is no noticeable effect on speed or memory consumption, so I guess the only way to make it faster is to implement MultiHeadAttention, which I'm not able to do right now. Instead, I'll focus on existing PRs and on finding the JSEP kernel that produces incorrect results. It should be one of the older ones (I suspect Conv or ConvTranspose), as Stable Diffusion has not been generating images correctly on WebGPU since I started working on it. I had hoped someone else would fix that by the time I finished with the kernels/optimizations 😅

---------

Co-authored-by: Guenther Schmuelling <guschmue@microsoft.com>
Co-authored-by: Yulong Wang <7679871+fs-eire@users.noreply.github.com>
2023-10-03 12:20:20 -07:00
.config Update tsaoptions.json: update the email alias (#13448) 2022-10-26 15:56:16 -07:00
.devcontainer Remove two lines in the Dockerfile for Github Codespace (#12278) 2022-07-21 20:52:17 -07:00
.gdn Update win-ci-pipeline.yml: enable xnnpack tests (#16244) 2023-06-14 19:12:42 -07:00
.github Bump actions/checkout from 3 to 4 (#17487) 2023-09-13 09:22:21 -07:00
.pipelines Bump DirectML version from 1.12.0 to 1.12.1 (#17225) 2023-08-20 09:55:38 -07:00
.vscode Close the JSON object in settings.json (#17583) 2023-09-26 09:51:13 -07:00
cgmanifests ONNX 1.15 integration (#17125) 2023-09-26 14:44:48 -07:00
cmake Enable backtrace in unit tests (#17655) 2023-09-29 12:32:56 -07:00
csharp [On-Device Training] Expose Parameters through the Training API (#17364) 2023-09-25 20:03:24 -07:00
dockerfiles Update cmake to 3.27 and upgrade Linux CUDA docker files from CentOS7 to UBI8 (#16856) 2023-09-05 18:12:10 -07:00
docs ONNX 1.15 integration (#17125) 2023-09-26 14:44:48 -07:00
include/onnxruntime/core Add support for specifying a custom logging function per session. (#17727) 2023-09-29 19:46:55 -07:00
java [java] Filling out the javadoc for the float8 types (#17694) 2023-09-27 10:52:11 -07:00
js [js/web] BiasSplitGelu and BiasAdd kernels (#17161) 2023-10-03 12:20:20 -07:00
objectivec Objective-C Add Support to Create and Query String ORTValues (#16764) 2023-07-20 17:39:29 -07:00
onnxruntime [js/web] BiasSplitGelu and BiasAdd kernels (#17161) 2023-10-03 12:20:20 -07:00
orttraining Python API to check whether collective ops are available or not (#17730) 2023-09-29 14:11:05 -07:00
rust rust bindings: Do not unnecessarily re-run build.rs (#17018) 2023-09-05 19:42:06 -07:00
samples Enable pylint and numpy rules (#15218) 2023-03-27 20:37:53 -07:00
swift/OnnxRuntimeBindingsTests Add iOS Swift Package Manager support (#15297) 2023-04-20 16:18:35 +10:00
tools [js/webgpu] support IO binding (#17480) 2023-09-29 11:24:42 -07:00
winml Add support for specifying a custom logging function per session. (#17727) 2023-09-29 19:46:55 -07:00
.clang-format Prevent GSL_SUPPRESS arguments from being modified by clang-format (#17242) 2023-08-22 18:26:53 -07:00
.clang-tidy Create clang-tidy CI (#12653) 2022-09-30 08:05:38 -07:00
.dockerignore
.gitattributes
.gitignore remove 'lib/' from .gitignore (#15613) 2023-04-24 18:43:32 -07:00
.gitmodules Remove onnxruntime extensions from list of gitmodules (#17615) 2023-09-19 17:12:14 -07:00
.lintrunner.toml Format c++ code under winml/ (#16660) 2023-07-25 21:56:50 -07:00
build.bat try to find patch.exe in git default installation folder (#17106) 2023-08-10 21:48:13 -07:00
build.sh Upgrade old Python version in packaging pipeline (#16667) 2023-07-17 08:24:47 -07:00
CITATION.cff
CODEOWNERS Add owners for public facing API files (#15288) 2023-03-30 17:16:15 -07:00
CONTRIBUTING.md Fix link to High Level Design (#11786) 2023-02-28 11:05:54 -08:00
lgtm.yml Fix lgtm C++ error (#13613) 2022-11-10 10:06:22 -08:00
LICENSE
NuGet.config
ort.wprp
ORT_icon_for_light_bg.png
Package.swift Objective-C Add Support to Create and Query String ORTValues (#16764) 2023-07-20 17:39:29 -07:00
packages.config Bump DirectML version from 1.12.0 to 1.12.1 (#17225) 2023-08-20 09:55:38 -07:00
pyproject.toml Updating QDQ to support Float8E4M3FN (#16550) 2023-08-08 12:18:48 +02:00
README.md add third-party pipeline status to README.md (#16155) 2023-05-31 22:14:39 -07:00
requirements-dev.txt ONNX 1.15 integration (#17125) 2023-09-26 14:44:48 -07:00
requirements-doc.txt
requirements-lintrunner.txt Bump clang-format to 16.0.6 in CI (#17099) 2023-08-10 13:53:04 -07:00
requirements-training.txt ONNX 1.15 integration (#17125) 2023-09-26 14:44:48 -07:00
requirements.txt.in
SECURITY.md Microsoft mandatory file (#11619) 2022-05-25 13:56:10 -07:00
setup.py Update tensorrt_dependencies in setup.py (#17562) 2023-09-15 08:20:47 -07:00
ThirdPartyNotices.txt Flash Attention v2 MHA (#17227) 2023-08-31 13:52:21 -07:00
VERSION_NUMBER Bump Up Version to 1.17.0 (#17587) 2023-09-20 11:02:58 +08:00

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
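As a rough illustration of what running inference looks like, the minimal Python sketch below loads an exported ONNX model and runs it once. The model path, input shape, and execution provider are placeholders chosen for this example and will differ per model.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session; ONNX Runtime runs graph optimizations at load time
# and dispatches to the requested execution provider.
session = ort.InferenceSession(
    "model.onnx",                       # hypothetical path to an exported ONNX model
    providers=["CPUExecutionProvider"],  # e.g. "CUDAExecutionProvider" if available
)

# Input names and shapes come from the exported model; this assumes a single
# image-like input for illustration.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```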

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
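The "one-line addition" referred to above is wrapping an existing torch.nn.Module with ORTModule so that forward and backward passes run through ONNX Runtime. A minimal sketch, assuming the onnxruntime-training package is installed and using a toy model and training step as placeholders (in practice this targets transformer models on NVIDIA GPUs):

```python
import torch
from onnxruntime.training import ORTModule  # requires the onnxruntime-training package

# Hypothetical PyTorch model standing in for a real training script's model.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
)

# The one-line addition: wrap the model with ORTModule.
model = ORTModule(model)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# The rest of the training loop is unchanged.
inputs = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
```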

Get Started & Resources

Builtin Pipeline Status

[Build-status badge table with Inference and Training columns for Windows, Linux, Mac, Android, iOS, Web, and other platforms.]

Third-party Pipeline Status

[Third-party build-status badge table for Linux.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.