diff --git a/BUILD.md b/BUILD.md
index e9a024969e..1532bb59fd 100644
--- a/BUILD.md
+++ b/BUILD.md
@@ -8,6 +8,8 @@
 * [Supported architectures and build environments](#supported-architectures-and-build-environments)
 * [Common Build Instructions](#common-build-instructions)
 * Additional Build Instructions - complete list: `./build.sh (or .\build.bat) --help`
+  * [Reduced Operator Kernel Build](#Reduced-Operator-Kernel-Build)
+  * [ONNX Runtime for Mobile Platforms](#ONNX-Runtime-for-Mobile-Platforms)
 * [ONNX Runtime Server (Linux)](#Build-ONNX-Runtime-Server-on-Linux)
 * Execution Providers
   * [NVIDIA CUDA](#CUDA)
@@ -143,6 +145,11 @@ GCC 4.x and below are not supported.
 |**Node.js**|--build_nodejs|Build Node.js binding. Implies `--build_shared_lib`|
 
 ---
+## Reduced Operator Kernel Build
+Reduced Operator Kernel builds allow you to customize the kernels in the build to provide smaller binary sizes - [see instructions](./docs/Reduced_Operator_Kernel_build.md).
+
+## ONNX Runtime for Mobile Platforms
+For builds compatible with mobile platforms, see more details in [ONNX_Runtime_for_Mobile_Platforms.md](./docs/ONNX_Runtime_for_Mobile_Platforms.md). Android and iOS build instructions can be found below on this page - [Android](#Android), [iOS](#iOS)
 
 ## Build ONNX Runtime Server on Linux
 Read more about ONNX Runtime Server [here](./docs/ONNX_Runtime_Server_Usage.md).
diff --git a/README.md b/README.md
index ae3f35bf28..02c8a06403 100644
--- a/README.md
+++ b/README.md
@@ -112,7 +112,7 @@ The following are required for usage of the official published packages.
 * Default GPU (CUDA)
   * The default GPU build requires CUDA runtime libraries being installed on the system:
-    * Version: **CUDA 10.1** and **cuDNN 7.6.5**
+    * Version: **CUDA 10.2** and **cuDNN 8.0.3**
   * Version dependencies from older ONNX Runtime releases can be found in [prior release notes](https://github.com/microsoft/onnxruntime/releases).
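As a companion to the Reduced Operator Kernel section added in the BUILD.md chunk above, here is a minimal sketch of generating an operator config file for such a build. Both the `domain;opset;op1,op2` line format and the `--include_ops_by_config` flag are assumptions drawn from the linked Reduced_Operator_Kernel_build.md, not verified against this change; the file name and op list are hypothetical.

```python
# Hedged sketch (assumptions, not from this diff): write a config file
# listing the only op kernels to keep in a reduced operator kernel build.
# Assumed line format: "domain;opset;op1,op2,...".
from pathlib import Path

# Hypothetical set of ops a model actually needs.
required_ops = {("ai.onnx", 12): ["Add", "MatMul", "Relu"]}

lines = [
    f"{domain};{opset};{','.join(sorted(ops))}"
    for (domain, opset), ops in sorted(required_ops.items())
]
Path("required_operators.config").write_text("\n".join(lines) + "\n")

# Assumed invocation (check `./build.sh --help` for the real flag name):
#   ./build.sh --config Release --include_ops_by_config required_operators.config
print(Path("required_operators.config").read_text(), end="")
```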
 ### Build from Source
@@ -145,7 +145,7 @@ For production scenarios, it's strongly recommended to build only from an [offic
 |CPU|GPU|IoT/Edge/Mobile|Other|
 |---|---|---|---|
-|||||
+|||||
 
 * [Roadmap: Upcoming accelerators](./docs/Roadmap.md#accelerators-and-execution-providers)
 * [Extensibility: Add an execution provider](docs/AddingExecutionProvider.md)
diff --git a/docs/Privacy.md b/docs/Privacy.md
index ba3fd462c6..034a892ac3 100644
--- a/docs/Privacy.md
+++ b/docs/Privacy.md
@@ -6,12 +6,12 @@ The software may collect information about you and your use of the software and
 ***
 
 ### Private Builds
-No data collection is performed when using your private builds.
+No data collection is performed when using your private builds built from source code.
 
 ### Official Builds
 ONNX Runtime does not maintain any independent telemetry collection mechanisms outside of what is provided by the platforms it supports. However, where applicable, ONNX Runtime will take advantage of platform-supported telemetry systems to collect trace events with the goal of improving product quality.
 
-Currently telemetry is only implemented for Windows builds and is turned **ON** by default in the official builds distributed on Nuget.org. This may be expanded to cover other platforms in the future. Data collection is implemented via 'Platform Telemetry' per vendor platform providers (see telemetry.h).
+Currently telemetry is only implemented for Windows builds and is turned **ON** by default in the official builds distributed in their respective package management repositories ([see here](../README.md#binaries)). This may be expanded to cover other platforms in the future. Data collection is implemented via 'Platform Telemetry' per vendor platform providers (see [telemetry.h](../onnxruntime/core/platform/telemetry.h)).
 
 #### Technical Details
 The Windows provider uses the [TraceLogging](https://docs.microsoft.com/en-us/windows/win32/tracelogging/trace-logging-about) API for its implementation.
 This enables ONNX Runtime trace events to be collected by the operating system, and based on user consent, this data may be periodically sent to Microsoft servers following GDPR and privacy regulations for anonymity and data access controls.
diff --git a/docs/Roadmap.md b/docs/Roadmap.md
index 305fe883a7..eda0ffb572 100644
--- a/docs/Roadmap.md
+++ b/docs/Roadmap.md
@@ -46,10 +46,9 @@ Supported
 * Windows 7+
 * Linux (various)
 * Mac OS X
-* Android (community contribution, Preview)
+* Android (Preview)
+* iOS (Preview)
 
-*Future*
-* *iOS*
 
 #### Languages
 Supported languages are listed in [API Documentation](../README.md#api-documentation). The core team is not actively working on other language bindings at this time. If there is a missing API, please file a request in [Issues](https://github.com/microsoft/onnxruntime/issues). Community contributions are welcome for other languages.
@@ -58,10 +57,7 @@ Supported languages are listed in [API Documenta
 #### New EPs
 To achieve the best performance on a growing set of compute targets across cloud and the intelligent edge, we invest in and partner with hardware partners and community members to add new execution providers. The flexible pluggability of ONNX Runtime is critical to support a broad range of scenarios and compute options.
 
-Supported
-
-Supported EPs are listed [here](../README.md#supported-accelerators). Upcoming EPs include:
-* Xilinx FPGA
+Supported EPs are listed [here](../README.md#supported-accelerators).
 
 #### CUDA operator coverage
 
@@ -74,7 +70,6 @@ In addition to new execution providers, we aim to make it easy for community par
 Performance is a key focus for ONNX Runtime. From latency to memory utilization to CPU usage, we are constantly seeking strategies to deliver the best performance.
 Although DNNs are rapidly driving research areas for innovation, we acknowledge that in practice, many companies and developers are still using traditional ML frameworks for reasons ranging from expertise to privacy to legality. As such, ONNX Runtime is focused on improvements and support for both DNNs and traditional ML.
 
 #### Examples of projects the team is working on:
-* Improvements to batch processing for scikit-learn models
 * More quantization support
 * Improved multithreading (e.g. smarter work sharding, user supplied thread pools, etc)
 * Graph optimizations
diff --git a/docs/Versioning.md b/docs/Versioning.md
index 794ddf6850..2b5b8e9f49 100644
--- a/docs/Versioning.md
+++ b/docs/Versioning.md
@@ -15,18 +15,18 @@ See [Release Management](ReleaseManagement.md)
 ## Backwards compatibility
 All versions of ONNX Runtime will support ONNX opsets all the way back to (and including) opset version 7.
-In other words if an ONNX Runtime release implements ONNX opset ver 9, it'll be able to run all
+In other words, if an ONNX Runtime release implements ONNX opset ver 9, it'll be able to run all
 models that are stamped with ONNX opset versions in the range [7-9].
 
 ### Version matrix
 The following table summarizes the relationship between the ONNX Runtime version and the ONNX opset version implemented in that release.
-Please note the backward compatibility notes above..
+Please note the backward compatibility notes above.
 For more details on ONNX Release versions, see [this page](https://github.com/onnx/onnx/blob/master/docs/Versioning.md).
 | ONNX Runtime release version | ONNX release version | ONNX opset version | ONNX ML opset version | Supported ONNX IR version | [Windows ML Availability](https://docs.microsoft.com/en-us/windows/ai/windows-ml/release-notes/)|
 |------------------------------|--------------------|--------------------|----------------------|------------------|------------------|
-| 1.5.1 | **1.7** down to 1.2 | 12 | 2 | 6 | Windows AI 1.4+ |
+| 1.5.1 | **1.7** down to 1.2 | 12 | 2 | 6 | Windows AI 1.5+ |
 | 1.4.0 | **1.7** down to 1.2 | 12 | 2 | 6 | Windows AI 1.4+ |
 | 1.3.1 | **1.7** down to 1.2 | 12 | 2 | 6 | Windows AI 1.4+ |
 | 1.3.0 | **1.7** down to 1.2 | 12 | 2 | 6 | Windows AI 1.3+ |
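The backwards-compatibility rule and the version matrix in the Versioning.md chunk above can be expressed as a small lookup; this is an illustrative sketch only (the table values are transcribed from the matrix above; `can_run` is not an onnxruntime API):

```python
# Illustrative transcription of the version matrix and the rule that a
# release implementing ONNX opset N can run any model stamped with an
# opset in [7, N]. Not an onnxruntime API.
MIN_SUPPORTED_OPSET = 7

# ONNX Runtime release -> highest ONNX opset implemented (from the table).
MAX_OPSET = {"1.5.1": 12, "1.4.0": 12, "1.3.1": 12, "1.3.0": 12}

def can_run(ort_version: str, model_opset: int) -> bool:
    """True if the given release can run a model stamped with model_opset."""
    return MIN_SUPPORTED_OPSET <= model_opset <= MAX_OPSET[ort_version]

print(can_run("1.5.1", 9))   # True: 9 falls within [7, 12]
print(can_run("1.5.1", 6))   # False: below the opset-7 floor
print(can_run("1.4.0", 13))  # False: above the implemented opset 12
```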