
# ONNXRuntime Extensions

ONNXRuntime Extensions is a comprehensive package that extends the capabilities of ONNX model conversion and inference. Please visit [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) to learn more about it.

## Custom Operators Supported

onnxruntime-extensions provides many useful custom operators that enhance the text processing capabilities of ONNXRuntime, including widely used string operators and popular tokenizers. For the list of supported custom operators and how to use them, please check the custom operators documentation in the onnxruntime-extensions repository.

## Build ONNXRuntime with Extensions

onnxruntime-extensions can be built as a static library and linked into ONNXRuntime. To enable custom operators from onnxruntime-extensions, add the build argument `--use_extensions`, which by default uses the onnxruntime-extensions git submodule at `cmake/external/onnxruntime-extensions`.

If you want to build ONNXRuntime with a pre-pulled onnxruntime-extensions, pass the extra argument `--extensions_overridden_path <path-to-onnxruntime-extensions>`.

Note: remember to use `--minimal_build custom_ops` when building a minimal runtime with custom operators from onnxruntime-extensions.
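
For example, a minimal sketch of a build command that pulls in a pre-existing onnxruntime-extensions checkout (the paths here are illustrative, not prescriptive):

```
D:\onnxruntime> build.bat --config Release --use_extensions --extensions_overridden_path D:\onnxruntime-extensions --parallel
```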

### Build with Operators Config

You can also pass a required-operators config file with the argument `--include_ops_by_config` to customize which operators are built into both onnxruntime and onnxruntime-extensions. Example content of `required_operators.config`:

```
# Generated from model/s
# domain;opset;op1,op2...
ai.onnx;12;Add,Cast,Concat,Squeeze
ai.onnx.contrib;1;GPT2Tokenizer,
```

In the operators config above, `ai.onnx.contrib` is the domain of the operators in onnxruntime-extensions. The build parses this line to determine which operators from onnxruntime-extensions are included.

### Generate Operators Config

To generate the `required_operators.config` file from a model, please follow the guidance in Converting ONNX models to ORT format.

If your model contains operators from onnxruntime-extensions, add the argument `--custom_op_library` and pass the path to the ortcustomops shared library, built following the shared library guidance in onnxruntime-extensions; a sketch of such an invocation follows below.
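
For example, a minimal sketch of invoking the conversion script with the custom operator library (the model and library paths are illustrative, and the exact options may vary between releases):

```
python -m onnxruntime.tools.convert_onnx_models_to_ort e2e_model.onnx --custom_op_library ./libortcustomops.so
```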

You can also edit `required_operators.config` manually if you know which custom operators are required and don't want to build the shared library.

### Build and Disable Exceptions

You can add the argument `--disable_exceptions` to disable exceptions in both onnxruntime and onnxruntime-extensions.

However, if the custom operators you use from onnxruntime-extensions (such as BlingFireTokenizer) use C++ exceptions, you will also need to add the argument `--enable_wasm_exception_throwing_override` so that Emscripten links in its exception throwing support library. If this argument is not set, Emscripten will report linking errors.

### Example Build Command

```
D:\onnxruntime> build.bat --config Release --build_wasm --enable_wasm_threads --enable_wasm_simd --skip_tests --disable_exceptions --disable_wasm_exception_catching --enable_wasm_exception_throwing_override --disable_rtti --use_extensions --parallel --minimal_build custom_ops --include_ops_by_config D:\required_operators.config
```

## E2E Example using Custom Operators

A common NLP task usually consists of several steps, including pre-processing, the DL model itself, and post-processing. Since an ONNX graph is a computation graph, it can, in principle, represent most of this code, so converting the pre/post-processing snippets into the ONNX model is efficient and productive.

Here is an E2E NLP example showing the usage of onnxruntime-extensions:

### Create E2E Model

You can use the ONNX helper functions to create an ONNX model with custom operators.

```python
import onnx
from onnx import helper

def get_file_content(path):
    # the GPT2Tokenizer attributes take the raw content of the vocab/merges files
    with open(path, "rt", encoding="utf-8") as f:
        return f.read()

# ...
e2e_nodes = []

# tokenizer node
tokenizer_node = helper.make_node(
    'GPT2Tokenizer', # custom operator supported in onnxruntime-extensions
    inputs=['input_str'],
    outputs=['token_ids', 'attention_mask'],
    vocab=get_file_content(vocab_file),
    merges=get_file_content(merges_file),
    name='gpt2_tokenizer',
    domain='ai.onnx.contrib' # domain of custom operator
)
e2e_nodes.append(tokenizer_node)

# deep learning model
dl_model = onnx.load("dl_model.onnx")
dl_nodes = dl_model.graph.node
e2e_nodes.extend(dl_nodes)

# construct E2E ONNX graph and model
e2e_graph = helper.make_graph(
    e2e_nodes,
    'e2e_graph',
    [input_tensors],   # graph inputs (ValueInfoProto, e.g. from helper.make_tensor_value_info)
    [output_tensors],  # graph outputs (ValueInfoProto)
)
# ...
```
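
One way to finish the example, as a minimal sketch: build the model with opset imports matching the operators config shown earlier (the versions here follow the `required_operators.config` example) and save it.

```python
opset_imports = [
    helper.make_opsetid('ai.onnx', 12),
    helper.make_opsetid('ai.onnx.contrib', 1),  # domain of the custom operators
]
e2e_model = helper.make_model(e2e_graph, opset_imports=opset_imports)
onnx.save(e2e_model, 'e2e_model.onnx')
```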

For more about the ONNX helper functions, please see the Python API Overview documentation.

### Run E2E Model in Python

```python
import onnxruntime as _ort
from onnxruntime_extensions import get_library_path as _lib_path

so = _ort.SessionOptions()
# register onnxruntime-extensions library
so.register_custom_ops_library(_lib_path())

# run onnxruntime session; e2e_model is the path to the saved model (or its serialized bytes)
sess = _ort.InferenceSession(e2e_model, so)
sess.run(...)
```
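
For example, a minimal sketch of a concrete run, assuming the `input_str` input name from the tokenizer node created above:

```python
import numpy as np

# string tensors are passed as numpy arrays of strings
inputs = {'input_str': np.array(["I love ONNXRuntime Extensions."])}
outputs = sess.run(None, inputs)  # None fetches all model outputs
```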

### Run E2E Model in JavaScript

To run the E2E ONNX model in JavaScript, first prepare the ONNX Runtime WebAssembly artifacts, include the generated `ort.min.js`, and then load and run the model in JS.

```js
// use an async context to call onnxruntime functions
async function main() {
    try {
        // create a new session and load the e2e model
        const session = await ort.InferenceSession.create('./e2e_model.onnx');

        // prepare inputs
        const tensorA = new ort.Tensor(...);
        const tensorB = new ort.Tensor(...);

        // prepare feeds: use model input names as keys
        const feeds = { a: tensorA, b: tensorB };

        // feed inputs and run
        const results = await session.run(feeds);

        // read from results
        const dataC = results.c.data;
        document.write(`data of result tensor 'c': ${dataC}`);

    } catch (e) {
        document.write(`failed to run inference on the ONNX model: ${e}.`);
    }
}
```