
ONNX Runtime Samples and Tutorials

Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.

General

Integrations


Python

Inference only

Inference with model conversion

Other

C#

C/C++

Java

Node.js

Samples

In each sample's implementation subdirectory, run:

npm install
node ./
  • Basic Usage - a demonstration of basic usage of ONNX Runtime Node.js binding.

  • Create Tensor - a demonstration of basic usage of creating tensors.


Azure Machine Learning

Inference and deploy through AzureML

For additional information on training in AzureML, please see AzureML Training Notebooks

Azure IoT Edge

Inference and Deploy with Azure IoT Edge

Azure Media Services

Video analysis through Azure Media Services, using YOLOv3 to build an IoT Edge module for object detection

Azure SQL

Deploy ONNX model in Azure SQL Edge

Windows Machine Learning

Examples of inferencing with ONNX Runtime through Windows Machine Learning

ML.NET

Object Detection with ONNX Runtime in ML.NET