Update onnx test runner documentation (#1651)

* Mention OrtCreateSessionFromArray in C API doc

* Update perf tool documentation to reflect the new graph optimization enums. Relax constraint for enable_all.

* Update one more doc

* Update onnx test runner documentation

* Add default in the docs
Pranav Sharma 2019-08-19 18:28:09 -07:00 committed by GitHub
parent 224dde7ef1
commit 377dcf60ac
3 changed files with 14 additions and 11 deletions


@@ -72,7 +72,7 @@ sess_options.set_graph_optimization_level(2)
```
* sess_options.session_thread_pool_size=2 controls how many threads to use to run your model
* sess_options.enable_sequential_execution=True controls whether to run the operators in your graph sequentially or in parallel. Usually, when a model has many branches, setting this option to false will give you better performance.
- * sess_options.set_graph_optimization_level(2). Please see onnxruntime_c_api.h (enum GraphOptimizationLevel) for the full list of all optimization levels.
+ * sess_options.set_graph_optimization_level(2). Default is 1. Please see [onnxruntime_c_api.h](../include/onnxruntime/core/session/onnxruntime_c_api.h#L241) (enum GraphOptimizationLevel) for the full list of all optimization levels.
### MKL_DNN/nGraph/MKL_ML Execution Provider
MKL_DNN, MKL_ML and nGraph all depend on OpenMP for parallelization. For these execution providers, we need to use OpenMP environment variables to tune the performance.
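Putting the session options above together, a minimal sketch of creating a session with them might look like the following (this assumes onnxruntime is installed; "model.onnx" is a placeholder path):

```python
import onnxruntime as rt

sess_options = rt.SessionOptions()
# Use two threads to run the model.
sess_options.session_thread_pool_size = 2
# Run operators in parallel; often helps when the model has many branches.
sess_options.enable_sequential_execution = False
# 2 -> extended optimizations; see enum GraphOptimizationLevel in onnxruntime_c_api.h.
sess_options.set_graph_optimization_level(2)

# "model.onnx" is a placeholder for your model file.
session = rt.InferenceSession("model.onnx", sess_options)
```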


@@ -38,11 +38,8 @@ void usage() {
"\t-e [EXECUTION_PROVIDER]: EXECUTION_PROVIDER could be 'cpu', 'cuda', 'mkldnn', 'tensorrt', 'ngraph' or 'openvino'. "
"Default: 'cpu'.\n"
"\t-x: Use parallel executor, default (without -x): sequential executor.\n"
-         "\t-o [optimization level]: Specifies the graph optimization level to enable. Valid values are 0 through 3. Default is 1.\n"
-         "\t\t0 -> Disable all optimizations\n"
-         "\t\t1 -> Enable basic optimizations\n"
-         "\t\t2 -> Enable extended optimizations\n"
-         "\t\t3 -> Enable extended+layout optimizations\n"
+         "\t-o [optimization level]: Default is 1. Valid values are 0 (disable), 1 (basic), 2 (extended), 99 (all).\n"
+         "\t\tPlease see onnxruntime_c_api.h (enum GraphOptimizationLevel) for the full list of all optimization levels. \n"
"\t-h: help\n"
"\n"
"onnxruntime version: %s\n",
@@ -181,10 +178,15 @@ int real_main(int argc, char* argv[], Ort::Env& env) {
case ORT_ENABLE_ALL:
graph_optimization_level = ORT_ENABLE_ALL;
break;
-           default:
-             fprintf(stderr, "See usage for valid values of graph optimization level\n");
-             usage();
-             return -1;
+           default: {
+             if (tmp > ORT_ENABLE_ALL) {  // relax constraint
+               graph_optimization_level = ORT_ENABLE_ALL;
+             } else {
+               fprintf(stderr, "See usage for valid values of graph optimization level\n");
+               usage();
+               return -1;
+             }
+           }
}
user_graph_optimization_level_set = true;
break;
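The relaxed handling above (a value beyond ORT_ENABLE_ALL is clamped to ORT_ENABLE_ALL instead of rejected) can be sketched in Python. The enum values 0/1/2/99 mirror the usage text; `parse_opt_level` is a hypothetical helper name, not part of the tool:

```python
# Enum values mirroring GraphOptimizationLevel in onnxruntime_c_api.h.
ORT_DISABLE_ALL = 0
ORT_ENABLE_BASIC = 1
ORT_ENABLE_EXTENDED = 2
ORT_ENABLE_ALL = 99

def parse_opt_level(tmp: int) -> int:
    """Map a user-supplied -o value to a GraphOptimizationLevel."""
    if tmp in (ORT_DISABLE_ALL, ORT_ENABLE_BASIC, ORT_ENABLE_EXTENDED, ORT_ENABLE_ALL):
        return tmp
    if tmp > ORT_ENABLE_ALL:  # relax constraint: clamp oversized values to "all"
        return ORT_ENABLE_ALL
    # Values such as 3..98 are still rejected, as in the C++ switch above.
    raise ValueError("See usage for valid values of graph optimization level")
```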


@@ -43,7 +43,8 @@ namespace perftest {
"\t-v: Show verbose information.\n"
"\t-x [thread_size]: Session thread pool size, must >=0.\n"
"\t-P: Use parallel executor instead of sequential executor.\n"
-       "\t-o [optimization level]: Please see onnxruntime_c_api.h (enum GraphOptimizationLevel) for the full list of all optimization levels. \n"
+       "\t-o [optimization level]: Default is 1. Valid values are 0 (disable), 1 (basic), 2 (extended), 99 (all).\n"
+       "\t\tPlease see onnxruntime_c_api.h (enum GraphOptimizationLevel) for the full list of all optimization levels. \n"
"\t-h: help\n");
}