ONNX Runtime Web

ONNX Runtime Web is a JavaScript library for running ONNX models in browsers and on Node.js.

ONNX Runtime Web has adopted WebAssembly and WebGL technologies for providing an optimized ONNX model inference runtime for both CPUs and GPUs.
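As a quick orientation, here is a minimal sketch of two common ways to load the library. The CDN URL and bundle file name are illustrative assumptions rather than fixed requirements.

```js
// In a browser, load the pre-built bundle via a <script> tag
// (the URL below is an illustrative CDN path; any hosted copy of ort.min.js works):
//   <script src="https://cdn.jsdelivr.net/npm/onnxruntime-web/dist/ort.min.js"></script>
// The bundle exposes a global `ort` object.

// In Node.js or with a bundler, install the package and import it:
//   npm install onnxruntime-web
const ort = require('onnxruntime-web');
// or, with ES modules:
// import * as ort from 'onnxruntime-web';
```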

Why ONNX models

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. The biggest advantage of ONNX is that it allows interoperability across different open source AI frameworks, which in turn offers more flexibility for AI framework adoption.

Why ONNX Runtime Web

With ONNX Runtime Web, web developers can score models directly in browsers, with benefits including reduced server-client communication and better protection of user privacy, as well as an install-free, cross-platform in-browser ML experience.

ONNX Runtime Web can run on both CPU and GPU. On the CPU side, WebAssembly is adopted to execute the model at near-native speed. ONNX Runtime Web compiles the native ONNX Runtime CPU engine into a WebAssembly backend using Emscripten, so it supports most functionalities the native ONNX Runtime offers, including full ONNX operator coverage, multi-threading, ONNX Runtime Quantization, and ONNX Runtime Mobile. For performance acceleration on GPUs, ONNX Runtime Web leverages WebGL, a popular standard for accessing GPU capabilities. We are continuously improving operator coverage and optimizing performance in the WebGL backend.
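For illustration, the sketch below shows how a backend is selected when creating a session and how the WebAssembly backend can be tuned. The model path and thread count are placeholder values, and multi-threading only takes effect when the environment supports it (e.g. a cross-origin-isolated page).

```js
import * as ort from 'onnxruntime-web';

// Optional WebAssembly tuning: multi-threading and SIMD
// (placeholder values; effective only when the environment supports them).
ort.env.wasm.numThreads = 4;
ort.env.wasm.simd = true;

// Pick the backend via the executionProviders session option:
// 'wasm' runs on CPU through the WebAssembly backend,
// 'webgl' runs on GPU through the WebGL backend.
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['webgl', 'wasm'], // listed in preference order
});
```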

See the Compatibility and Operators sections below for the list of platforms and operators ONNX Runtime Web currently supports.

Usage

Refer to ONNX Runtime JavaScript examples for samples and tutorials.
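The sketch below shows one complete inference round trip. The model file ./model.onnx, its input name 'input', and the input shape [1, 3, 224, 224] are hypothetical placeholders; substitute the values for your own model.

```js
import * as ort from 'onnxruntime-web';

async function run() {
  // Create a session (model path is a placeholder).
  const session = await ort.InferenceSession.create('./model.onnx');

  // Build an input tensor: type, flat data array, and shape must match the model.
  const data = Float32Array.from({ length: 1 * 3 * 224 * 224 }, () => Math.random());
  const input = new ort.Tensor('float32', data, [1, 3, 224, 224]);

  // Run inference; feeds are keyed by the model's input names.
  const results = await session.run({ input: input });

  // Each output is an ort.Tensor; inspect its dims and data.
  const output = results[session.outputNames[0]];
  console.log(output.dims, output.data.slice(0, 5));
}

run();
```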

Documents

Development

Refer to the following links for development information:

Compatibility

| OS/Browser       | Chrome      | Edge        | Safari      | Electron    | Node.js |
|------------------|-------------|-------------|-------------|-------------|---------|
| Windows 10       | wasm, webgl | wasm, webgl | -           | wasm, webgl | wasm    |
| macOS            | wasm, webgl | wasm, webgl | wasm, webgl | wasm, webgl | wasm    |
| Ubuntu LTS 18.04 | wasm, webgl | wasm, webgl | -           | wasm, webgl | wasm    |
| iOS              | wasm, webgl | wasm, webgl | wasm, webgl | -           | -       |
| Android          | wasm, webgl | wasm, webgl | -           | -           | -       |

Operators

WebAssembly backend

ONNX Runtime Web currently supports all operators in the ai.onnx and ai.onnx.ml operator sets.

WebGL backend

ONNX Runtime Web currently supports a subset of operators in the ai.onnx operator set. See webgl-operators.md for a complete, detailed list of which ONNX operators are supported by the WebGL backend.

WebGPU backend

The WebGPU backend is still an experimental feature. See webgpu-operators.md for a detailed list of which ONNX operators are supported by the WebGPU backend.
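As a hedged sketch only: assuming the WebGPU-enabled bundle (ort.webgpu.min.js) or the corresponding package entry point is loaded, the experimental backend can be requested the same way as the others, by passing 'webgpu' as an execution provider. Availability depends on the browser's WebGPU support.

```js
// Assumes the WebGPU-enabled bundle has been loaded, e.g.
//   <script src=".../ort.webgpu.min.js"></script>
// and that the browser exposes WebGPU (navigator.gpu).
const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['webgpu'],
});
```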

License

License information can be found here.