
ONNX Runtime is a cross-platform inference and training machine-learning accelerator.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
## Get Started
General Information: onnxruntime.ai
Usage documentation and tutorials: onnxruntime.ai/docs
Companion sample repositories:
- ONNX Runtime Inferencing: microsoft/onnxruntime-inference-examples
- ONNX Runtime Training: microsoft/onnxruntime-training-examples
## Build Pipeline Status
| System | CPU | GPU | EPs |
|---|---|---|---|
| Windows | | | |
| Linux | | | |
| Mac | | | |
| Android | | | |
| iOS | | | |
| WebAssembly | | | |
## Data/Telemetry
Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
## Contributions and Feedback
We welcome contributions! Please see the contribution guidelines.
For feature requests or bug reports, please file a GitHub Issue.
For general discussion or questions, please use GitHub Discussions.
## Code of Conduct
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
## License
This project is licensed under the MIT License.