Fix broken and outdated links in documentation (#14092)

### Description
<!-- Describe your changes. -->

I fixed some broken links in the C API documentation, then did a quick pass over all of the links I could find and fixed those as well.

### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->

I got some 404s when exploring the documentation and wanted to fix them.
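The changed URLs follow a few recurring patterns: the `www.` host prefix was dropped, execution-provider pages moved out of `/docs/reference/`, and the graph-optimization page moved from `/docs/resources/` to `/docs/performance/`. A minimal Python sketch of those rewrite rules (a hypothetical helper, not part of this PR's tooling; the actual links were fixed by hand):

```python
import re

# Hypothetical helper: applies the URL migrations seen in this change set
# so similar stale onnxruntime.ai links could be found mechanically.
REWRITES = [
    # The docs site dropped the "www." host prefix (and http:// variants).
    (r"https?://www\.onnxruntime\.ai/", "https://onnxruntime.ai/"),
    # Execution-provider pages moved out of /docs/reference/.
    (r"/docs/reference/execution-providers/", "/docs/execution-providers/"),
    # Graph-optimization docs moved from /resources/ to /performance/.
    (r"/docs/resources/graph-optimizations\.html",
     "/docs/performance/graph-optimizations.html"),
]

def rewrite_link(url: str) -> str:
    """Return the updated form of a (possibly stale) onnxruntime.ai doc URL."""
    for pattern, replacement in REWRITES:
        url = re.sub(pattern, replacement, url)
    return url
```

Links that already use the new layout pass through unchanged, so the helper is safe to run over every URL it finds.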
James Yuzawa 2023-02-23 13:48:04 -05:00 committed by GitHub
parent 16b39e5b87
commit d925055a3e
12 changed files with 19 additions and 19 deletions


@@ -58,7 +58,7 @@ namespace Microsoft.ML.OnnxRuntime
/// <summary>
/// Updates the configuration knobs of OrtTensorRTProviderOptions that will eventually be used to configure a TensorRT EP
/// Please refer to the following on different key/value pairs to configure a TensorRT EP and their meaning:
-/// https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html
+/// https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html
/// </summary>
/// <param name="providerOptions">key/value pairs used to configure a TensorRT Execution Provider</param>
public void UpdateOptions(Dictionary<string, string> providerOptions)
@@ -169,7 +169,7 @@ namespace Microsoft.ML.OnnxRuntime
/// <summary>
/// Updates the configuration knobs of OrtCUDAProviderOptions that will eventually be used to configure a CUDA EP
/// Please refer to the following on different key/value pairs to configure a CUDA EP and their meaning:
-/// https://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html
+/// https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
/// </summary>
/// <param name="providerOptions">key/value pairs used to configure a CUDA Execution Provider</param>
public void UpdateOptions(Dictionary<string, string> providerOptions)


@@ -89,7 +89,7 @@ git submodule update --init
### **1. Using pre-built container images for Python API**
-The unified container image from [Dockerhub](https://hub.docker.com/repository/docker/openvino/onnxruntime_ep_ubuntu20) can be used to run an application on any of the target accelerators. In order to select the target accelerator, the application should explicitly specifiy the choice using the `device_type` configuration option for OpenVINO Execution provider. Refer to [OpenVINO EP runtime configuration documentation](https://www.onnxruntime.ai/docs/reference/execution-providers/OpenVINO-ExecutionProvider.html#summary-of-options) for details on specifying this option in the application code.
+The unified container image from [Dockerhub](https://hub.docker.com/repository/docker/openvino/onnxruntime_ep_ubuntu20) can be used to run an application on any of the target accelerators. In order to select the target accelerator, the application should explicitly specify the choice using the `device_type` configuration option for OpenVINO Execution provider. Refer to [OpenVINO EP runtime configuration documentation](https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html#configuration-options) for details on specifying this option in the application code.
If the `device_type` runtime config option is not explicitly specified, CPU will be chosen as the hardware target execution.
### **2. Building from Dockerfile**


@@ -24,7 +24,7 @@ ai.onnx.contrib;1;GPT2Tokenizer,
In above operators config, `ai.onnx.contrib` is the domain name of operators in onnxruntime-extensions. We would parse this line to generate required operators in onnxruntime-extensions for build.
### Generate Operators Config
-To generate the **required_operators.config** file from model, please follow the guidance [Converting ONNX models to ORT format](https://onnxruntime.ai/docs/how-to/mobile/model-conversion.html).
+To generate the **required_operators.config** file from model, please follow the guidance [Converting ONNX models to ORT format](https://onnxruntime.ai/docs/reference/ort-format-models.html#convert-onnx-models-to-ort-format).
If your model contains operators from onnxruntime-extensions, please add argument `--custom_op_library` and pass the path to **ortcustomops** shared library built following guidance [share library](https://github.com/microsoft/onnxruntime-extensions#the-share-library-for-non-python).
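The `domain;opset;op-list` line format described in this file's diff can be sketched as a small parser. This is an illustrative assumption, not the actual build tooling, and the function name is hypothetical:

```python
# Minimal sketch (not the actual onnxruntime-extensions build tooling) of
# parsing one line of a required_operators.config file: a domain name, an
# opset version, and a comma-separated operator list, e.g.
# "ai.onnx.contrib;1;GPT2Tokenizer,".
def parse_config_line(line: str):
    """Split a config line into (domain, opset, operator names)."""
    domain, opset, ops = line.strip().split(";")
    # A trailing comma leaves an empty entry; filter it out.
    op_names = [op for op in ops.split(",") if op]
    return domain, int(opset), op_names
```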


@@ -910,7 +910,7 @@ struct OrtApi {
/** \brief Set the optimization level to apply when loading a graph
*
-* Please see https://www.onnxruntime.ai/docs/resources/graph-optimizations.html for an in-depth explanation
+* Please see https://onnxruntime.ai/docs/performance/graph-optimizations.html for an in-depth explanation
* \param[in,out] options The session options object
* \param[in] graph_optimization_level The optimization level
*
@@ -2335,7 +2335,7 @@ struct OrtApi {
* Lifetime of the created allocator will be valid for the duration of the environment.
* Returns an error if an allocator with the same ::OrtMemoryInfo is already registered.
*
-* See https://onnxruntime.ai/docs/reference/api/c-api.html for details.
+* See https://onnxruntime.ai/docs/get-started/with-c.html for details.
*
* \param[in] env ::OrtEnv instance
* \param[in] mem_info
@@ -2663,7 +2663,7 @@ struct OrtApi {
*
* Create the configuration of an arena that can eventually be used to define an arena based allocator's behavior.
*
-* Supported keys are (See https://onnxruntime.ai/docs/reference/api/c-api.html for details on what the
+* Supported keys are (See https://onnxruntime.ai/docs/get-started/with-c.html for details on what the
* following parameters mean and how to choose these values.):
* "max_mem": Maximum memory that can be allocated by the arena based allocator.
* Use 0 for ORT to pick the best value. Default is 0.
@@ -2819,7 +2819,7 @@ struct OrtApi {
/** \brief Set options in a TensorRT Execution Provider.
*
-* Please refer to https://www.onnxruntime.ai/docs/reference/execution-providers/TensorRT-ExecutionProvider.html#c-api-example
+* Please refer to https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#cc
* to know the available keys and values. Key should be in null terminated string format of the member of ::OrtTensorRTProviderOptionsV2
* and value should be its related range.
*
@@ -2880,7 +2880,7 @@ struct OrtApi {
* The behavior of this is exactly the same as OrtApi::CreateAndRegisterAllocator except
* instead of ORT creating an allocator based on provided info, in this case
* ORT uses the user-provided custom allocator.
-* See https://onnxruntime.ai/docs/reference/api/c-api.html for details.
+* See https://onnxruntime.ai/docs/get-started/with-c.html for details.
*
* \param[in] env
* \param[in] allocator User provided allocator


@@ -32,7 +32,7 @@ def mobileDescription = 'The ONNX Runtime Mobile package is a size optimized inf
'but with reduced disk footprint targeting mobile platforms. To minimize binary size this library supports a ' +
'reduced set of operators and types aligned to typical mobile applications. The ONNX model must be converted to ' +
'ORT format in order to use it with this package. ' +
-'See https://onnxruntime.ai/docs/reference/ort-model-format.html for more details.'
+'See https://onnxruntime.ai/docs/reference/ort-format-models.html for more details.'
def defaultDescription = 'ONNX Runtime is a performance-focused inference engine for ONNX (Open Neural Network ' +
'Exchange) models. This package contains the Android (aar) build of ONNX Runtime. It includes support for all ' +
'types and operators, for ONNX format models. All standard ONNX models can be executed with this package. ' +


@@ -243,7 +243,7 @@ It should be able to consumed by both from projects that uses NPM packages (thro
#### Reduced WebAssembly artifacts
-By default, the WebAssembly artifacts from onnxruntime-web package allows use of both standard ONNX models (.onnx) and ORT format models (.ort). There is an option to use a minimal build of ONNX Runtime to reduce the binary size, which only supports ORT format models. See also [ORT format model](https://onnxruntime.ai/docs/tutorials/mobile/overview.html) for more information.
+By default, the WebAssembly artifacts from onnxruntime-web package allows use of both standard ONNX models (.onnx) and ORT format models (.ort). There is an option to use a minimal build of ONNX Runtime to reduce the binary size, which only supports ORT format models. See also [ORT format model](https://onnxruntime.ai/docs/reference/ort-format-models.html) for more information.
#### Reduced JavaScript bundle file size


@@ -41,7 +41,7 @@ typedef NS_ENUM(int32_t, ORTTensorElementDataType) {
/**
* The ORT graph optimization levels.
* See here for more details:
-* https://www.onnxruntime.ai/docs/resources/graph-optimizations.html
+* https://onnxruntime.ai/docs/performance/graph-optimizations.html
*/
typedef NS_ENUM(int32_t, ORTGraphOptimizationLevel) {
ORTGraphOptimizationLevelNone,


@@ -534,11 +534,11 @@ std::unique_ptr<IExecutionProvider> CreateExecutionProviderInstance(
return cuda_provider_info->CreateExecutionProviderFactory(info)->CreateProvider();
} else {
if (!Env::Default().GetEnvironmentVar("CUDA_PATH").empty()) {
-ORT_THROW("CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.");
+ORT_THROW("CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.");
}
}
}
-LOGS_DEFAULT(WARNING) << "Failed to create " << type << ". Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.";
+LOGS_DEFAULT(WARNING) << "Failed to create " << type << ". Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.";
#endif
} else if (type == kRocmExecutionProvider) {
#ifdef USE_ROCM


@@ -45,7 +45,7 @@
"```\n",
"Finally, launch Jupyter Notebook and you can choose gpu_env as kernel to run this notebook.\n",
"\n",
-"Onnxruntime-gpu need specified version of CUDA and cuDNN. You can find the Requirements [here]( http://www.onnxruntime.ai/docs/how-to/install.html). Remember to add the directories to PATH environment variable (See [CUDA and cuDNN Path](#CUDA-and-cuDNN-Path) below)."
+"Onnxruntime-gpu need specified version of CUDA and cuDNN. You can find the Requirements [here](https://onnxruntime.ai/docs/install/). Remember to add the directories to PATH environment variable (See [CUDA and cuDNN Path](#CUDA-and-cuDNN-Path) below)."
]
},
{
@@ -348,7 +348,7 @@
"## 4. Inference ONNX Model with ONNX Runtime ##\n",
"\n",
"### CUDA and cuDNN Path\n",
-"onnxruntime-gpu has dependency on [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn). Required CUDA version can be found [here](http://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements)\n"
+"onnxruntime-gpu has dependency on [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://developer.nvidia.com/cudnn). Required CUDA version can be found [here](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements)\n"
]
},
{


@@ -197,7 +197,7 @@ def optimize_model(
):
"""Optimize Model by OnnxRuntime and/or python fusion logic.
-ONNX Runtime has graph optimizations (https://onnxruntime.ai/docs/resources/graph-optimizations.html).
+ONNX Runtime has graph optimizations (https://onnxruntime.ai/docs/performance/graph-optimizations.html).
However, the coverage is limited. We also have graph fusions implemented in Python to improve the coverage.
They can be combined: ONNX Runtime will run first when opt_level > 0, then graph fusions in Python will be applied.


@@ -325,7 +325,7 @@ if __name__ == "__main__":
type=str,
help="Path to configuration file. "
"Create with <ORT root>/tools/python/create_reduced_build_config.py and edit if needed. "
-"See https://onnxruntime.ai/docs/reference/reduced-operator-config-file.html for more "
+"See https://onnxruntime.ai/docs/reference/operators/reduced-operator-config-file.html for more "
"information.",
)


@@ -236,7 +236,7 @@ def run_check_with_model(
if unsupported:
logger.info("\nModel is not supported by the pre-built package due to unsupported types and/or operators.")
logger.info(
-"Please see https://onnxruntime.ai/docs/reference/mobile/prebuilt-package/ for information "
+"Please see https://onnxruntime.ai/docs/install/#install-on-web-and-mobile for information "
"on what is supported in the pre-built package."
)
logger.info(