ONNX Runtime Samples and Tutorials
Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
General
Integrations
- Azure Machine Learning
- Azure IoT Edge
- Azure Media Services
- Azure SQL Edge and Managed Instance
- Windows Machine Learning
- ML.NET
Python
Inference only
- CPU: Basic
- CPU: Resnet50
- ONNX-Ecosystem Docker image
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- NUPHAR EP samples
Inference with model conversion
Other
C#
C/C++
- C: SqueezeNet
- C++: model-explorer - single and batch processing
- C++: SqueezeNet
- C++: MNIST
Java
Node.js
Samples
In each sample's implementation subdirectory, run
npm install
node ./
- Basic Usage - a demonstration of basic usage of the ONNX Runtime Node.js binding.
- Create Tensor - a demonstration of creating tensors.
- Create InferenceSession - shows how to create an InferenceSession in different ways.
Azure Machine Learning
Inference and deploy through AzureML
For additional information on training in AzureML, please see the AzureML Training Notebooks.
- Inferencing on CPU using ONNX Model Zoo models:
- Inferencing on CPU with PyTorch model training:
- Inferencing on CPU with model conversion for existing (CoreML) model:
- Inferencing on GPU with TensorRT Execution Provider (AKS):
Azure IoT Edge
Inference and Deploy with Azure IoT Edge
Azure Media Services
Azure SQL
Deploy ONNX model in Azure SQL Edge
Windows Machine Learning
Examples of inferencing with ONNX Runtime through Windows Machine Learning