# ONNX Runtime Samples and Tutorials
Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
## Python

### Inference only
- Basic Model Inferencing (single node Sigmoid) on CPU
- Model Inferencing (Resnet50) on CPU
- Model Inferencing on CPU using ONNX-Ecosystem Docker image
- Model Inferencing on CPU using ONNX Runtime Server (SSD Single Shot MultiBox Detector)
- Model Inferencing using NUPHAR Execution Provider
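
The Python samples above share the same core inferencing pattern: create an `InferenceSession` from a model file and call `run` with a feed dictionary. A minimal sketch of that pattern follows; the model path and input shape are placeholders rather than values from any particular sample.

```python
# Minimal ONNX Runtime inferencing sketch (CPU). "model.onnx" and the input
# shape are placeholders; substitute a real model and a matching input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Ask the model for its input name and build a dummy tensor of the right type.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None for the output names returns every model output.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```

Samples that target a specific execution provider (for example NUPHAR) typically differ mainly in which providers are passed to the session; the rest of the pattern stays the same.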
### Inference with model conversion
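
These samples add a conversion step before inferencing, pairing a converter with the same run pattern shown above. As an illustration only, the sketch below converts a scikit-learn classifier with `skl2onnx` and checks it with ONNX Runtime; the model choice, file name, and feature count are assumptions, not taken from a specific notebook.

```python
# Hypothetical conversion example: scikit-learn model -> ONNX file -> ONNX Runtime.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Declare the input as a float tensor with 4 features (the iris feature count).
onnx_model = convert_sklearn(model, initial_types=[("input", FloatTensorType([None, 4]))])
with open("logreg_iris.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Run the converted model and compare its labels against scikit-learn's.
sess = ort.InferenceSession("logreg_iris.onnx", providers=["CPUExecutionProvider"])
onnx_labels = sess.run(None, {"input": X[:5].astype(np.float32)})[0]
print(onnx_labels, model.predict(X[:5]))
```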
### Inference and deploy through AzureML
- Inferencing on CPU using ONNX Model Zoo models
- Inferencing on CPU with a model conversion step for existing models
- Inferencing on CPU with PyTorch model training

  For additional information on training in AzureML, please see the AzureML Training Notebooks.

- Inferencing on GPU with TensorRT Execution Provider (AKS); see the provider-selection sketch after this list
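
For the GPU sample above (and the NUPHAR sample in the Python section), the execution provider is selected when the session is created. A minimal sketch, assuming an onnxruntime-gpu build that includes TensorRT support; providers later in the list act as fallbacks for nodes the earlier ones cannot run.

```python
# Execution provider selection sketch. Requires a build that includes the
# TensorRT provider; "model.onnx" is a placeholder path.
import onnxruntime as ort

print(ort.get_available_providers())  # check what this build actually offers

session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",  # preferred: TensorRT
        "CUDAExecutionProvider",      # fallback: plain CUDA
        "CPUExecutionProvider",       # final fallback: CPU
    ],
)
print(session.get_providers())  # providers actually assigned to this session
```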
### Inference and Deploy with Azure IoT Edge
### Other
## C#

## C/C++

## Java

## Node.js
In each example's implementation subdirectory, run:

```bash
npm install
node ./
```