Mirror of https://github.com/saymrwulf/onnxruntime.git, synced 2026-05-14 20:48:00 +00:00
# ONNX Runtime Samples and Tutorials
Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
## Contents
- General
- Integrations
  - Azure Machine Learning
  - Azure IoT Edge
  - Azure Media Services
  - Azure SQL Edge and Managed Instance
  - Windows Machine Learning
  - ML.NET
  - Huggingface
## General
### Python
#### Inference only
- Basic
- Resnet50
- ONNX-Ecosystem Docker image samples
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- NUPHAR EP samples
#### Inference with model conversion
- SKL tutorials
- Keras - Basic
- SSD Mobilenet (Tensorflow)
- BERT-SQuAD (PyTorch) on CPU
- BERT-SQuAD (PyTorch) on GPU
- BERT-SQuAD (Keras)
- BERT-SQuAD (Tensorflow)
- GPT2 (PyTorch)
- EfficientDet (Tensorflow)
- EfficientNet-Edge (Tensorflow)
- EfficientNet-Lite (Tensorflow)
- EfficientNet (Keras)
- MNIST (Keras)
#### Quantization
#### Other
### C#
### C/C++
- C: SqueezeNet
- C++: model-explorer - single and batch processing
- C++: SqueezeNet
- C++: MNIST
### Java
### Node.js
## Integrations
### Azure Machine Learning
Inference and deploy through AzureML
For additional information on training in AzureML, please see the AzureML Training Notebooks.
- Inferencing on CPU using ONNX Model Zoo models
- Inferencing on CPU with PyTorch model training
- Inferencing on CPU with model conversion for existing (CoreML) model
- Inferencing on GPU with TensorRT Execution Provider (AKS)
### Azure IoT Edge
Inference and Deploy with Azure IoT Edge
### Azure Media Services
### Azure SQL
Deploy ONNX model in Azure SQL Edge
### Windows Machine Learning
Examples of inferencing with ONNX Runtime through Windows Machine Learning
### ML.NET
Object Detection with ONNX Runtime in ML.NET