onnxruntime/nodejs
stevenlix 814638cdff
Cherry pick PRs to Rel-1.3.1 (#4198)
* link to folder instead of READMEs inside folder (#3938)

otherwise hard to find the source code

* [Node.js binding] fix linux build (#3927)

* [Node.js binding] add build flag for node.js binding (#3948)

* [Nodejs binding] create a new pipeline to generate signed binaries (#4104)

* add yml files

* update pipeline

* fix yaml syntax

* yaml pop BuildCSharp

* update yaml

* do not stage codesign summary

* fix build: pipeline Node.js version to 12.16.3 (#4145)

* [Node.js binding] upgrade node-addon-api to 3.0 (#4148)

* [Node.js binding] add linux and mac package (#4157)

* try mac pipeline

* fix path separator

* copy prebuilds folder

* split esrp yaml for win/mac

* disable mac signing temporarily

* add linux

* fix indent

* add nodetool in linux

* add nodetool in win-ci-2019

* replace linux build by custom docker scripts

* use manylinux as Node 12.16 does not work on CentOS 6

* try ubuntu

* loosen timeout for test case - multiple runs calls

* add script to support updating the Node.js binding version (#4164)

* [java] Adds a CUDA test (#3956)

* [java] - adding a CUDA-enabled test.

* Adding --build_java to the windows gpu ci pipeline.

* Removing a stray line from the unit tests that always enabled CUDA for Java.

* Update OnnxRuntime.java for OS X environment. (#3985)

onnxruntime init fails due to a wrong path when reading the native libraries. On 64-bit OS X systems, the arch name is detected as x86, which generates an invalid path for reading the native libraries.

Exception java.lang.UnsatisfiedLinkError: no onnxruntime in java.library.path
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
	at java.lang.Runtime.loadLibrary0(Runtime.java:870)
	at java.lang.System.loadLibrary(System.java:1122)
	at ai.onnxruntime.OnnxRuntime.load(OnnxRuntime.java:174)
	at ai.onnxruntime.OnnxRuntime.init(OnnxRuntime.java:81)
	at ai.onnxruntime.OrtEnvironment.<clinit>(OrtEnvironment.java:24)

* Create Java publishing pipeline (#3944)

Create CPU and GPU Java publishing pipelines. Final jars are tested on all platforms. However, signing and publishing to Maven are manual steps.

* Change group id to com.microsoft.onnxruntime per requirements.

* Java GPU artifact naming (#4179)

Modify the Gradle build so the artifactId has a _gpu suffix for GPU builds.
  Pass the USE_CUDA flag on CUDA builds.
  Adjust the publishing pipelines to extract the POM from the correct path.

Co-Authored-By: @Craigacp

* bump up ORT version to 1.3.1 (#4181)

* move back to toolset 14.16 to possibly work around nvcc bug (#4180)

* Symbolic shape inference exit on models without onnx opset used (#4090)

* Symbolic shape inference exit on models without onnx opset used

* Temporary fix for ConvTranspose with symbolic input dims

Co-authored-by: Changming Sun <me@sunchangming.com>

* Fix Nuphar test failure

* Enlarge the read buffer size in C#/Java test code (#4150)

1. Enlarge the read buffer size further so that our code can run even faster. TODO: apply similar changes to the Python and some other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set for x86_32. It is not a new model, but previously we didn't run the test against x86_32.

* Temporarily disable windows static analysis CI job

* skip model coreml_Imputer-LogisticRegression_sklearn_load_breast_cancer

* Delete unused variable

Co-authored-by: Prasanth Pulavarthi <prasantp@microsoft.com>
Co-authored-by: Yulong Wang <yulongw@microsoft.com>
Co-authored-by: Adam Pocock <adam.pocock@oracle.com>
Co-authored-by: jji2019 <49252772+jji2019@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <dmitrism@microsoft.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Changming Sun <me@sunchangming.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
2020-06-12 11:27:02 -07:00
.vscode Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00
examples Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
lib [Node.js API] optimize prebuild (#3844) 2020-05-06 15:48:13 -07:00
script Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
src Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
test Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
.clang-format Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00
.eslintrc.js Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00
.gitignore Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00
.npmignore Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00
CMakeLists.txt Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
package-lock.json Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
package.json Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
README.md Cherry pick PRs to Rel-1.3.1 (#4198) 2020-06-12 11:27:02 -07:00
tsconfig.json Node.js binding for ONNX Runtime (#3613) 2020-05-05 11:45:12 -07:00

ONNX Runtime Node.js API

The ONNX Runtime Node.js binding enables Node.js applications to run ONNX model inference.

Usage

Install the latest stable version:

npm install onnxruntime

Install the latest dev version:

npm install onnxruntime@dev

Refer to the Node.js samples for examples and tutorials.
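
As a quick orientation, here is a minimal inference sketch. The model file ./model.onnx, its input name "data", and the [1, 3, 224, 224] shape are placeholders for illustration; adapt them to your own model:

import { InferenceSession, Tensor } from 'onnxruntime';

async function main() {
  // Load the model and create an inference session.
  const session = await InferenceSession.create('./model.onnx');

  // Prepare a dummy float32 input tensor (replace with real data).
  const data = Float32Array.from({ length: 1 * 3 * 224 * 224 }, () => Math.random());
  const feeds = { data: new Tensor('float32', data, [1, 3, 224, 224]) };

  // Run inference; the feed keys must match the model's input names.
  const results = await session.run(feeds);

  // Read the first output tensor.
  const outputName = session.outputNames[0];
  console.log(outputName, results[outputName].data);
}

main();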

Requirements

ONNX Runtime works on Node.js v12.x+ or Electron v5.x+.

The following platforms are supported with pre-built binaries:

  • Windows x64 CPU NAPI_v3
  • Linux x64 CPU NAPI_v3
  • MacOS x64 CPU NAPI_v3

To use the binding on platforms without pre-built binaries, you can build the Node.js binding from source and consume it with npm install <onnxruntime_repo_root>/nodejs/. See also BUILD.MD for building the ONNX Runtime Node.js binding locally.
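
For example, assuming the repository is cloned locally and ONNX Runtime plus the Node.js binding have been built following BUILD.MD, a local install could look like this (paths are illustrative):

git clone --recursive https://github.com/Microsoft/onnxruntime
cd onnxruntime
# build ONNX Runtime and the Node.js binding as described in BUILD.MD, then:
npm install ./nodejs/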

License

License information can be found here.