Update kernel matching logic: decouple from op schemas and remove kernel def hashes (#12791)
# Motivation
Currently, ORT minimal builds use kernel def hashes to map each node to the kernel to execute when loading the model. Because the kernel def hashes must be known ahead of time, this only works for statically registered kernels. It works well for the CPU EP.
For this approach to work, the kernel def hashes must also be known at ORT format model conversion time, which means any EP with statically registered kernels must also be enabled then. This is not an issue for the always-available CPU EP, but we do not want to require that every EP which statically registers kernels is always available too.
Consequently, we explore another approach to matching nodes to kernels that does not rely on kernel def hashes. An added benefit is the possibility of moving away from kernel def hashes completely, which would eliminate the maintenance burden of keeping the hashes stable.

# Approach
In a full build, ORT uses some information from the ONNX op schema to match a node to a kernel. We want to avoid including the ONNX op schemas in a minimal build to reduce binary size. Essentially, we take the necessary information from the ONNX op schema and make it available in a minimal build.
We decouple the kernel matching logic from the ONNX op schema. The kernel matching logic instead relies on per-op information which can be obtained either from the ONNX op schema or from another source. Since this per-op information must be available in a minimal build, where there are no ONNX op schemas, we store it in the ORT format model.
Existing uses of kernel def hashes to look up kernels are replaced with the updated kernel matching logic. We no longer store kernel def hashes in the ORT format model's session state and runtime optimization representations, and we no longer keep the logic to generate kernel def hashes and ensure their stability.
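
To make the approach concrete, here is a minimal sketch of matching a node to a kernel from per-op information rather than a kernel def hash. It is written in TypeScript for consistency with this package's code, and every type and function name in it is hypothetical; the actual implementation lives in ORT's C++ core.

```ts
// Hypothetical types -- illustrative only, not ORT's actual interfaces.
interface OpIdentifier {
  domain: string;       // e.g. '' (ONNX domain) or 'com.microsoft'
  opType: string;       // e.g. 'Conv'
  sinceVersion: number; // opset version defining the node's op; taken from
                        // the op schema in a full build, or read from the
                        // ORT format model in a minimal build
}

interface KernelDef {
  domain: string;
  opType: string;
  sinceVersion: number; // first opset version the kernel supports
  untilVersion: number; // last opset version the kernel supports
}

// Match a node to a kernel using only per-op information: the node matches
// a kernel if the (domain, op type) pair agrees and the node's opset
// version falls within the kernel's supported version range.
function matchKernel(node: OpIdentifier, registry: KernelDef[]): KernelDef | undefined {
  return registry.find(
    (k) =>
      k.domain === node.domain &&
      k.opType === node.opType &&
      k.sinceVersion <= node.sinceVersion &&
      node.sinceVersion <= k.untilVersion
  );
}
```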

onnxruntime-react-native

ONNX Runtime React Native provides a JavaScript library for running ONNX models in React Native apps.

Why ONNX models

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. The biggest advantage of ONNX is that it enables interoperability across different open source AI frameworks, which offers more flexibility when adopting AI frameworks.

Why ONNX Runtime React Native

With ONNX Runtime React Native, React Native developers can score pre-trained ONNX models directly in React Native apps by leveraging ONNX Runtime Mobile, which provides a lightweight inference solution for Android and iOS.

Installation

yarn add onnxruntime-react-native
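
For iOS apps, you will typically also need to install the native CocoaPods dependencies after adding the package (a standard React Native step; the exact directory depends on your project layout):

cd ios && pod install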

Usage

import { InferenceSession } from "onnxruntime-react-native";

// load a model
const session: InferenceSession = await InferenceSession.create(modelPath);
// input as InferenceSession.OnnxValueMapType
const result = await session.run(input, ['num_detection:0', 'detection_classes:0']);
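
A slightly fuller sketch of providing inputs as tensors is shown below. It assumes a model with a single float32 input; the input name 'input' and output name 'output' are placeholders for your model's actual names.

import { InferenceSession, Tensor } from "onnxruntime-react-native";

// load a model from a file path on the device
const session: InferenceSession = await InferenceSession.create(modelPath);

// build the feeds: a map from input names to tensors
// 'input' is a placeholder - use your model's actual input name
const data = Float32Array.from([1.0, 2.0, 3.0, 4.0]);
const feeds = { input: new Tensor('float32', data, [1, 4]) };

// run inference; the result maps output names to tensors
const results = await session.run(feeds);
const output = results.output; // 'output' is a placeholder name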

Refer to the ONNX Runtime JavaScript examples for samples and tutorials. Unlike the other JavaScript platforms (Node.js and web), the React Native library does not support the following features:

  • Unsigned data types in Tensor
  • Model loading using ArrayBuffer

Operator and type support

ONNX Runtime React Native currently supports most operators used by popular models. Refer to ONNX Runtime Mobile Package Operator and Type Support for details.

License

License information can be found here.