Adding a new op
A new op can be written and registered with ONNXRuntime in the following three ways:
1. Using the custom op API in the C/C++ APIs (onnxruntime_c_api.h)
- Create an OrtCustomOpDomain with the domain name used by the custom ops
- Create an OrtCustomOp structure for each op and add them to the OrtCustomOpDomain with OrtCustomOpDomain_Add
- Call OrtAddCustomOpDomain to add the custom domain of ops to the session options
See this for examples of MyCustomOp and SliceCustomOp that use the C++ helper API (onnxruntime_cxx_api.h).
You can also compile the custom ops into a shared library and use that to run a model via the C++ API. The same test file contains an example.
The source code for a sample custom op shared library containing two custom kernels is here.
See this for an example called testRegisterCustomOpsLibrary that uses the Python API
to register a shared library that contains custom op kernels.
Currently, the only supported Execution Providers (EPs) for custom ops registered via this approach are the
CUDA and the CPU EPs.
Note that when a model is being inferred on GPU, ONNXRuntime will insert a MemcpyToHost op before a CPU custom op and append a MemcpyFromHost op after it, to make sure the tensor(s) are accessible for the call. This means no extra effort is required from the custom op developer for this case.
2. Using RegisterCustomRegistry API
- Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder).
- Create a CustomRegistry object and register your kernel and schema with this registry.
- Register the custom registry with ONNXRuntime using RegisterCustomRegistry API.
See this for an example.
3. Contributing the op to ONNXRuntime
This is mostly meant for ops that are in the process of being proposed to ONNX. This way, you don't have to wait for approval from the ONNX team if the op is required in production today. See this for an example.