# ONNX Runtime Samples and Tutorials

Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime.

For a list of available dockerfiles and published images to help with getting started, see [this page](../dockerfiles/README.md).

* [Python](#Python)
* [C#](#C)
* [C/C++](#CC)
* [Java](#Java)
* [Node.js](#Nodejs)

***

## Python

**Inference only**

* [Basic Model Inferencing (single node Sigmoid) on CPU](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/simple_onnxruntime_inference.ipynb)
* [Model Inferencing (ResNet50) on CPU](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/resnet50_modelzoo_onnxruntime_inference.ipynb)
* [Model Inferencing on CPU](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem/inference_demos) using the [ONNX-Ecosystem Docker image](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem)
* [Model Inferencing on CPU using ONNX Runtime Server (SSD Single Shot MultiBox Detector)](https://github.com/onnx/tutorials/blob/master/tutorials/OnnxRuntimeServerSSDModel.ipynb)
* [Model Inferencing using the NUPHAR Execution Provider](../docs/python/notebooks/onnxruntime-nuphar-tutorial.ipynb)
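
These notebooks all follow the same basic pattern: load a serialized `.onnx` model into an `InferenceSession`, then call `run` with a dictionary of named inputs. A minimal sketch of that pattern (the `model.onnx` path, the example `(1, 3, 224, 224)` input shape, and the random input data are all placeholders for your own model):

```python
import numpy as np
import onnxruntime as ort

# Load the model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx")

# Inspect the graph to find the expected input name and shape.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Random data stands in for a real preprocessed input
# (e.g. a (1, 3, 224, 224) image tensor for ResNet50).
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None for the output names returns all model outputs.
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```
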
**Inference with model conversion**

* [SKL Pipeline: Train, Convert, and Inference](https://microsoft.github.io/onnxruntime/python/tutorial.html)
* [Keras: Convert and Inference](https://microsoft.github.io/onnxruntime/python/auto_examples/plot_dl_keras.html#sphx-glr-auto-examples-plot-dl-keras-py)
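
The SKL tutorial linked above walks through this flow in detail; as a minimal sketch of the same train, convert, and inference steps using `skl2onnx` (the model, the file name, and the `"input"` tensor name are arbitrary choices for illustration):

```python
import numpy as np
import onnxruntime as ort
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train an ordinary scikit-learn model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Convert to ONNX, declaring the input signature for the graph.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("logreg_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Run the converted model with ONNX Runtime.
session = ort.InferenceSession("logreg_iris.onnx")
preds = session.run(None, {"input": X[:5].astype(np.float32)})[0]
print(preds)  # predicted class labels for the first five rows
```
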
**Inference and deploy through AzureML**

* Inferencing on CPU using [ONNX Model Zoo](https://github.com/onnx/models) models:
  * [Facial Expression Recognition](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb)
  * [MNIST Handwritten Digits](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
  * [ResNet50 Image Classification](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb)
* Inferencing on CPU with a model conversion step for existing models:
  * [TinyYOLO](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
* Inferencing on CPU with PyTorch model training:
  * [MNIST](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb)

*For additional information on training in AzureML, please see the [AzureML Training Notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/training).*

* Inferencing on GPU with the TensorRT Execution Provider (AKS):
  * [FER+](../docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb)

**Inference and deploy with Azure IoT Edge**

* [Intel OpenVINO](http://aka.ms/onnxruntime-openvino)
* [NVIDIA TensorRT on Jetson Nano (ARM64)](http://aka.ms/onnxruntime-arm64)
* [ONNX Runtime with Azure ML](https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/AzureML-OpenVINO/README.md)

**Other**

* [Running ONNX model tests](./docs/Model_Test.md)
* [Common Errors with explanations](https://microsoft.github.io/onnxruntime/python/auto_examples/plot_common_errors.html#sphx-glr-auto-examples-plot-common-errors-py)

## C#

* [Inferencing Tutorial](../docs/CSharp_API.md#getting-started)

## C/C++

* [C - Inferencing (SqueezeNet)](../csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/C_Api_Sample.cpp)
* [C++ - Inferencing (SqueezeNet)](../csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp)
* [C++ - Inferencing (MNIST)](./c_cxx/MNIST)

## Java

* [Inference Tutorial](../docs/Java_API.md#getting-started)
* [MNIST inference](../java/src/test/java/sample/ScoreMNIST.java)

## Node.js

This section contains several samples that demonstrate how to use the ONNX Runtime Node.js binding.

### Samples

* [Basic Usage](./nodejs/01_basic-usage/) - a demonstration of basic usage of the ONNX Runtime Node.js binding.
* [Create Tensor](./nodejs/02_create-tensor/) - a demonstration of basic usage of creating tensors.
<!--
* [Create Tensor (Advanced)](./nodejs/03_create-tensor-advanced/) - a demonstration of advanced usage of creating tensors.
-->
* [Create InferenceSession](./nodejs/04_create-inference-session/) - shows how to create an `InferenceSession` in different ways.

### Usage

In each sample's implementation subdirectory, run:

```
npm install
node ./
```