ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
kunal-vaishnavi 901c2bc384
Whisper Model Optimization (#15473)
### Description
This PR contains fusion-level and kernel-level optimizations for
[OpenAI's Whisper](https://github.com/openai/whisper).

Some of the added optimizations include:

- Pruning of duplicate/unnecessary inputs and outputs
- Fusion support for Whisper models with or without these inputs/outputs (e.g. with these inputs/outputs if exporting with an older official Optimum version, without them if exporting with Optimum from source)
- Attention fusions
   - For Whisper's encoder and decoder
   - Modified symbolic shape inference for the present output when no past input exists (for the decoder)
- Multi-head attention fusions
   - For Whisper's decoder and decoder with past
   - Packed MatMul for the 3 MatMuls excluded from multi-head attention fusion
- Attention kernel changes
   - CPU:
      - Support for different Q and KV sequence lengths
      - Parallel memset for large sequence lengths
      - Convert the broadcast add after the MatMul of Q and K (add_qk) to an element-wise add
      - Separate the present key-value output into present key and present value (per the multi-head attention spec)
   - CUDA:
      - Use the memory-efficient attention compute kernel with present state (for the decoder)
- Multi-head attention kernel changes
   - CPU:
      - Introduction of a multi-head attention CPU kernel (one previously did not exist)
      - Use AddBiasReshape instead of AddBiasTranspose when sequence length = 1 (for the decoder with past)
      - Support for different Q, K, V input shapes
      - Pass past key and past value directly as key and value
   - CUDA:
      - Use the memory-efficient attention compute kernel with past and/or present state (for the decoder with past)
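The packed-MatMul idea above can be sketched in NumPy: the three Q/K/V projection weight matrices are concatenated so a single larger MatMul replaces three separate ones, and the result is split back afterwards. The shapes and variable names here are illustrative, not the actual kernel implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden = 4, 8

x = rng.standard_normal((seq_len, hidden))
# Separate projection weights for Q, K, and V (illustrative shapes).
w_q = rng.standard_normal((hidden, hidden))
w_k = rng.standard_normal((hidden, hidden))
w_v = rng.standard_normal((hidden, hidden))

# Unpacked: three independent MatMuls.
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Packed: concatenate the weights once, run a single MatMul, then split.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)  # (hidden, 3*hidden)
qkv = x @ w_qkv
q2, k2, v2 = np.split(qkv, 3, axis=1)

# The packed form produces the same Q, K, V with one GEMM launch.
assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```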

### Usage
To use the optimizations, run the ORT transformer optimizer script as
follows:
```
$ cd onnxruntime/onnxruntime/python/tools/transformers/
$ python3 optimizer.py --input <filename>.onnx --output <filename>.onnx --model_type bart --num_heads <number of attention heads, depends on the size of the whisper model used> --hidden_size <attention hidden size, depends on the size of the whisper model used> --use_external_data_format --use_multi_head_attention
```

Once optimized, here's an example of how to run Whisper with [Hugging
Face's Optimum](https://github.com/huggingface/optimum):
```
from transformers.onnx.utils import get_preprocessor
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from optimum.pipelines import pipeline as ort_pipeline

import whisper # Installed from OpenAI's repo - setup instructions at https://github.com/openai/whisper/

directory = './whisper_opt' # Where the optimized ONNX models are located
model_name = 'openai/whisper-tiny'
device = 'cpu'

# Get pipeline
processor = get_preprocessor(model_name)
model = ORTModelForSpeechSeq2Seq.from_pretrained(
    directory,
    use_io_binding=(device == 'cuda'),
    provider='CPUExecutionProvider',
).to(device)
pipe = ort_pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    device=(-1 if device == 'cpu' else 0),
)

# Load audio file and run pipeline
audio = whisper.load_audio('tests/jfk.flac')
audio = whisper.pad_or_trim(audio)
outputs = pipe([audio])
print(outputs)
```

Note: To use these changes with Optimum, it is recommended to install Optimum from source so that the following changes are included:
- https://github.com/huggingface/optimum/pull/872
- https://github.com/huggingface/optimum/pull/920

### Motivation and Context
This PR helps the following issues:
- https://github.com/microsoft/onnxruntime/issues/15100
- https://github.com/microsoft/onnxruntime/issues/15235
- https://github.com/huggingface/optimum/issues/869 (work in progress)

This PR can be used with the other currently merged Whisper PRs:
- https://github.com/microsoft/onnxruntime/pull/15247
- https://github.com/microsoft/onnxruntime/pull/15339
- https://github.com/microsoft/onnxruntime/pull/15362
- https://github.com/microsoft/onnxruntime/pull/15365
- https://github.com/microsoft/onnxruntime/pull/15427

This PR uses changes from the following merged PRs:
- https://github.com/microsoft/onnxruntime/pull/14198
- https://github.com/microsoft/onnxruntime/pull/14146
- https://github.com/microsoft/onnxruntime/pull/14201
- https://github.com/microsoft/onnxruntime/pull/14928 (this introduced
the new multi-head attention spec)
2023-04-18 17:13:54 -07:00

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
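The "one-line addition" refers to wrapping an existing `torch.nn.Module` in `ORTModule`. The sketch below shows where that line goes in an ordinary PyTorch training loop; the wrap itself is left as a comment so the sketch runs without the onnxruntime-training package installed.

```python
import torch

model = torch.nn.Linear(10, 2)
# The one-line change (requires the onnxruntime-training / torch-ort package):
#   from torch_ort import ORTModule
#   model = ORTModule(model)
# Everything below is an unmodified PyTorch training loop.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 10), torch.randn(4, 2)
for _ in range(3):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```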

Get Started & Resources

Build Pipeline Status

[Build-status badge table: Inference and Training pipeline badges for Windows, Linux, Mac, Android, iOS, Web, and Other; badge images omitted.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.