Remove unnecessary option from convert_onnx_models_to_ort.py, fix old instructions. (#11088)

Remove unnecessary --nnapi_partitioning_stop_ops option from convert_onnx_models_to_ort.py, fix old instructions.
Edward Chen 2022-04-11 11:19:21 -07:00 committed by GitHub
parent 00b595e389
commit 269be2fe63
5 changed files with 15 additions and 19 deletions


@@ -8,5 +8,5 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
 cd ${DIR}
 python3 ./single_add_gen.py
-python3 -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_level basic .
+ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL=basic python3 -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_style=Fixed .


@@ -350,7 +350,7 @@ TEST(GraphRuntimeOptimizationTest, TestNhwcTransformerDirectlyUpdatesQLinearConv
 // - set environment variable ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL=extended
 // - run:
 //   python -m onnxruntime.tools.convert_onnx_models_to_ort
-//     --optimization_style Fixed
+//     --optimization_style=Fixed
 //     testdata/transform/runtime_optimization/qdq_convs.onnx
 ORT_TSTR("testdata/transform/runtime_optimization/qdq_convs.extended.ort"),
 [](const OpCountMap& loaded_ops, const OpCountMap& initialized_ops) {


@@ -5,6 +5,8 @@ We also save both ONNX and ORT format versions of the model with level 1 (aka 'b
 required_ops.config, which is used in the reduced ops CI build.
 - mnist.level1_opt.ort is used in NNAPI unit tests.
-The level 1 optimized model files can be generated by running the following command from the repo root and renaming the
-resulting .onnx and .ort files accordingly:
-$ python ./tools/python/convert_onnx_models_to_ort.py --optimization_level basic --save_optimized_onnx_model ./onnxruntime/test/testdata/mnist.onnx
+The level 1 optimized model files can be generated with the following steps:
+- Set environment variable ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL=basic
+- From this directory, run
+  $ python -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_style=Fixed --save_optimized_onnx_model ./mnist.onnx
+- Rename the resulting .onnx and .ort files accordingly
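For reference, the environment-variable override used in the new instructions can be sketched as follows. This is a minimal, hypothetical illustration (assuming the converter falls back to a default level when the variable is unset; `resolve_optimization_level` is an invented name, not the tool's actual function):

```python
import os

# Hypothetical helper: the converter reads its optimization level from the
# ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL environment variable,
# falling back to a default when it is unset (assumption for illustration).
def resolve_optimization_level(default="all"):
    return os.environ.get("ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL", default)

# Mirrors the instructions above: set the variable, then run the converter.
os.environ["ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL"] = "basic"
print(resolve_optimization_level())
```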


@@ -1,13 +1,15 @@
 This directory contains ORT format models to test for backwards compatibility when we are forced to make an update that invalidates a kernel hash.
-When this happens, first create a new directory for the currently released ORT version.
+When this happens, first create a directory for the currently released ORT version if one doesn't already exist.
-Find a model that uses the operator with the kernel hash change. The ONNX test data is generally a good place to do this. See cmake/external/onnx/onnx/backend/test/data/node
+Find a model that uses the operator with the kernel hash change and copy it to the directory for the currently released ORT version.
+The ONNX test data is generally a good place to do this. See cmake/external/onnx/onnx/backend/test/data/node.
 Convert the model to ORT format using the currently released ORT version. This model will contain the original hash.
 e.g.
-Running `python -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_level=basic ORTv1.10/not1.onnx`
+Setting environment variable ORT_CONVERT_ONNX_MODELS_TO_ORT_OPTIMIZATION_LEVEL=basic
+and then running `python -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_style=Fixed ORTv1.10/not1.onnx`
 will create the ORT format model `not1.basic.ort`
 Add both the ONNX and the ORT format models to the repository.
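The output-naming convention used in the example above (`ORTv1.10/not1.onnx` converted at the `basic` level producing `not1.basic.ort`) can be sketched like this. The pattern is inferred from the README text; `ort_output_name` is a hypothetical helper, not the converter's actual code:

```python
from pathlib import Path

# Hypothetical sketch of the naming pattern: the optimization level is
# inserted between the model's base name and the .ort suffix.
def ort_output_name(onnx_path: str, optimization_level: str) -> str:
    p = Path(onnx_path)
    return str(p.with_name(f"{p.stem}.{optimization_level}.ort"))

print(ort_output_name("ORTv1.10/not1.onnx", "basic"))
```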


@@ -189,11 +189,6 @@ def parse_args():
     parser.add_argument('--allow_conversion_failures', action='store_true',
                         help='Whether to proceed after encountering model conversion failures.')
-    parser.add_argument('--nnapi_partitioning_stop_ops',
-                        help='Specify the list of NNAPI EP partitioning stop ops. '
-                             'In particular, specify the value of the "ep.nnapi.partitioning_stop_ops" session '
-                             'options config entry.')
     parser.add_argument('--target_platform', type=str, default=None, choices=['arm', 'amd64'],
                         help='Specify the target platform where the exported model will be used. '
                              'This parameter can be used to choose between platform-specific options, '
@@ -226,9 +221,6 @@ def convert_onnx_models_to_ort():
     session_options_config_entries = {}
-    if args.nnapi_partitioning_stop_ops is not None:
-        session_options_config_entries["ep.nnapi.partitioning_stop_ops"] = args.nnapi_partitioning_stop_ops
     if args.target_platform == 'arm':
         session_options_config_entries["session.qdqisint8allowed"] = "1"
     else:
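After this change, the remaining session-config logic can be sketched as follows. This is a standalone rendering of the dict usage visible in the hunk above, not the script's actual structure; `build_session_config_entries` is a hypothetical name:

```python
# Sketch of the simplified logic: with --nnapi_partitioning_stop_ops removed,
# only the target platform contributes a session-options config entry here.
def build_session_config_entries(target_platform):
    session_options_config_entries = {}
    if target_platform == "arm":
        # Key copied from the script above: allow int8 QDQ handling on ARM.
        session_options_config_entries["session.qdqisint8allowed"] = "1"
    return session_options_config_entries

print(build_session_config_entries("arm"))
```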