### Description

- Create an `OnnxruntimeJSIHelper` native module that provides two JSI functions:
  - `jsiOnnxruntimeStoreArrayBuffer`: stores a buffer in the Blob Manager and returns a blob object (iOS: `RCTBlobManager`, Android: `BlobModule`)
  - `jsiOnnxruntimeResolveArrayBuffer`: uses a blob object to retrieve the buffer
  - Part of the implementation is based on [react-native-blob-jsi-helper](https://github.com/mrousavy/react-native-blob-jsi-helper)
- Replace base64 encode/decode:
  - `loadModelFromBlob`: renamed from `loadModelFromBase64EncodedBuffer`
  - `run`: uses blob objects in place of `input.data` and `results[].data`

For [this context](https://github.com/microsoft/onnxruntime/issues/16031#issuecomment-1556527812), this saves a lot of time and avoids blocking the JS thread while decoding the return value: 3700ms -> 5~20ms for that case (the resolve function itself takes only 0.x ms).

### Motivation and Context

It's related to #16031, but it is not a full migration to JSI. It just uses JSI through the Blob Manager to replace the slow part (base64 encode/decode). Rewriting it entirely in JSI could be complicated due to issues like type conversion and threading, so this PR might be considered a minor change.

/cc @skottmckay
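A minimal sketch of what calling the two JSI functions might look like from JavaScript. The function names come from this PR, but the exact signatures and the blob-object shape (`blobId`, `offset`, `size`) are assumptions based on how React Native's Blob Manager identifies blobs:

```ts
// Hypothetical usage sketch: the PR installs these functions on the JSI
// global object; the signatures below are assumptions, not the actual API.
type BlobRef = { blobId: string; offset: number; size: number };

declare global {
  // Stores the buffer via the native Blob Manager and returns a blob reference.
  var jsiOnnxruntimeStoreArrayBuffer: (buffer: ArrayBuffer) => BlobRef;
  // Resolves a blob reference back into the underlying ArrayBuffer.
  var jsiOnnxruntimeResolveArrayBuffer: (blob: BlobRef) => ArrayBuffer;
}

// Round-trip a buffer without any base64 encoding/decoding.
const data = new Uint8Array([1, 2, 3, 4]).buffer;
const blobRef = global.jsiOnnxruntimeStoreArrayBuffer(data);
const restored = global.jsiOnnxruntimeResolveArrayBuffer(blobRef);
```

Passing only the small blob reference across the bridge, instead of a base64 string of the whole payload, is what turns the 3700ms round trip into a few milliseconds.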
# onnxruntime-react-native
ONNX Runtime React Native provides a JavaScript library for running ONNX models in a React Native app.
## Why ONNX models

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. The biggest advantage of ONNX is that it allows interoperability across different open source AI frameworks, which offers more flexibility for AI framework adoption.
## Why ONNX Runtime React Native

With ONNX Runtime React Native, React Native developers can score pre-trained ONNX models directly in React Native apps by leveraging ONNX Runtime, providing a lightweight inference solution for Android and iOS.
## Installation

```sh
yarn add onnxruntime-react-native
```
## Usage

```ts
import { InferenceSession } from "onnxruntime-react-native";

// load a model
const session: InferenceSession = await InferenceSession.create(modelPath);
// input as InferenceSession.OnnxValueMapType
const result = await session.run(input, ['num_detection:0', 'detection_classes:0']);
```
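A more complete sketch of building the input map with a `Tensor`, assuming a hypothetical model whose single float32 input is named `input` with shape `[1, 3]`; the input name, shape, and values are illustrative only:

```ts
import { InferenceSession, Tensor } from "onnxruntime-react-native";

// Illustrative model: one float32 input named "input" with shape [1, 3].
const session = await InferenceSession.create(modelPath);
const feeds = {
  input: new Tensor("float32", Float32Array.from([0.1, 0.2, 0.3]), [1, 3]),
};
const results = await session.run(feeds);
// Each result is a Tensor; read its values from the .data property.
const output = results[session.outputNames[0]];
```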
Refer to ONNX Runtime JavaScript examples for samples and tutorials.

The ONNX Runtime React Native library does not currently support the following features:
- Tensors with unsigned data types, with the exception of uint8 on Android devices
- Model loading using ArrayBuffer
## Operator and type support
ONNX Runtime React Native version 1.13 supports both ONNX and ORT format models, and includes all operators and types.
Previous ONNX Runtime React Native packages use the ONNX Runtime Mobile package, and support operators and types used in popular mobile models. See here for the list of supported operators and types.
## License
License information can be found here.