ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
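As a quick illustration of the inference API described above, the following is a minimal Python sketch, not an official sample: the model path "model.onnx", the dummy input shape, and the use of only the default CPU execution provider are assumptions for illustration.

```python
# Minimal inference sketch (assumes a local "model.onnx" with a single input).
import numpy as np
import onnxruntime as ort

# Create a session; execution providers are tried in the order listed.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so we can build a matching feed.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run with placeholder data; the shape (1, 3, 224, 224) is only an example.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

On machines with hardware accelerators installed, ort.get_available_providers() lists the execution providers that the installed package can use.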

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
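To illustrate the "one-line addition" for training, here is a hedged sketch using ORTModule from the torch-ort package; the toy model, optimizer settings, and placeholder batch are assumptions, not part of any official script.

```python
# Hedged training sketch: wrapping an existing PyTorch model with ORTModule.
import torch
from torch_ort import ORTModule  # provided by the torch-ort package

model = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)
model = ORTModule(model)  # the one-line change: forward/backward now run through ONNX Runtime

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder batch; a real script would iterate a DataLoader.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The rest of the training loop, data loading, and distributed setup stay as they were in the original PyTorch script.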

Get Started

General Information: onnxruntime.ai

Usage documentation and tutorials: onnxruntime.ai/docs

Companion sample repositories:

ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples

ONNX Runtime Training: microsoft/onnxruntime-training-examples

Build Pipeline Status

Per-system build status badges (Windows, Linux, Mac, Android, iOS, and WebAssembly, across CPU, GPU, and execution-provider configurations) are shown on the repository page.

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
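For reference, the Python package exposes helpers to toggle telemetry at runtime; this is a hedged sketch, and the exact behavior (telemetry is only emitted from official Windows builds) should be confirmed against the privacy statement.

```python
import onnxruntime as ort

# Turn off telemetry events for this process (a no-op on builds without telemetry).
ort.disable_telemetry_events()

# ... create sessions and run inference as usual ...

# Telemetry can be re-enabled later if desired.
ort.enable_telemetry_events()
```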

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.