onnxruntime/tools/python/util/check_onnx_model_mobile_usability.py
Justin Chu d834ec895a
Adopt lintrunner as the linting tool - take 2 (#15085)
### Description

`lintrunner` is a linter runner successfully used by pytorch, onnx and
onnx-script. It provides a uniform experience running linters locally
and in CI. It supports all major dev systems: Windows, Linux and macOS.
The checks are enforced by the `Python format` workflow.

This PR adopts `lintrunner` for onnxruntime and fixes ~2000 flake8 errors
in Python code. `lintrunner` now runs all required Python lints,
including `ruff` (replacing `flake8`), `black` and `isort`. Future lints
like `clang-format` can be added.
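
For reference, `lintrunner` reads its linter definitions from a `.lintrunner.toml` at the repo root. A minimal illustrative entry might look like the following (the adapter command shown assumes the `lintrunner-adapters` helper package and is not copied from this PR):

```toml
# Illustrative .lintrunner.toml entry, not taken verbatim from this PR.
[[linter]]
code = 'RUFF'
include_patterns = ['**/*.py']
exclude_patterns = ['build/**']
command = [
    'python',
    '-m',
    'lintrunner_adapters',
    'run',
    'ruff_linter',
    '--',
    '@{{PATHSFILE}}',
]
```

With a config like this, `lintrunner init` typically installs the configured tools, `lintrunner` reports issues, and `lintrunner -a` applies the available auto-fixes.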

Most errors are auto-fixed by `ruff` and the fixes should be considered
robust.

Lints that are more complicated to fix are suppressed with `# noqa` for
now and should be addressed in follow-up PRs.
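
As a small sketch of what such a suppression looks like (illustrative code, not from this PR): rule E402 flags imports that do not appear at the top of the file, and an intentional out-of-order import can be silenced inline rather than restructured:

```python
import logging

# Logging is deliberately configured before the next import; the
# out-of-order import below would trip flake8/ruff rule E402, so it is
# suppressed inline rather than restructured.
logging.basicConfig(format="%(levelname)s: %(message)s")

import argparse  # noqa: E402

parser = argparse.ArgumentParser()
```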

### Notable changes

1. This PR **removed some suboptimal patterns**:

	- `not xxx in` -> `xxx not in` membership checks
	- bare excepts (`except:` -> `except Exception`)
	- unused imports
	
	A follow-up PR will remove:
	
	- `import *`
	- mutable values as default in function definitions (`def func(a=[])`)
	- more unused imports
	- unused local variables

2. Replaced `flake8` with `ruff`. `ruff` is much (~40x) faster than
`flake8` and more robust. We are using it successfully in onnx and
onnx-script. It also supports auto-fixing many flake8 errors.

3. Removed the legacy flake8 CI flow and updated the docs.

4. The added workflow supports SARIF code-scanning reports on GitHub;
example snapshot:
	

![image](https://user-images.githubusercontent.com/11205048/212598953-d60ce8a9-f242-4fa8-8674-8696b704604a.png)

5. Removed `onnxruntime-python-checks-ci-pipeline` as redundant.
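
The suboptimal patterns removed in item 1 can be illustrated with a short sketch (all names here are illustrative, not from the onnxruntime codebase):

```python
items = [1, 2, 3]

# Before: `if not 4 in items:` -- after the fix, the idiomatic form:
if 4 not in items:
    items.append(4)

# Before: a bare `except:` -- after the fix, catch Exception so that
# SystemExit and KeyboardInterrupt are not silently swallowed:
try:
    items[99]
except Exception as exc:
    caught = type(exc).__name__

# Before: a mutable default argument (`def add(item, dest=[])`), which
# shares one list across all calls -- after the fix:
def add(item, dest=None):
    if dest is None:
        dest = []
    dest.append(item)
    return dest

assert items == [1, 2, 3, 4]
assert caught == "IndexError"
assert add(1) == [1]
assert add(2) == [2]  # no state leaks between calls
```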

### Motivation and Context

Unifies the linting experience in CI and locally.

Replacing https://github.com/microsoft/onnxruntime/pull/14306

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-03-24 15:29:03 -07:00


# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

import argparse
import logging
import pathlib

# need this before the mobile helper imports for some reason
logging.basicConfig(format="%(levelname)s: %(message)s")

from .mobile_helpers import check_model_can_use_ort_mobile_pkg, usability_checker  # noqa: E402


def check_usability():
    parser = argparse.ArgumentParser(
        description="""Analyze an ONNX model to determine how well it will work in mobile scenarios, and whether
        it is likely to be able to use the pre-built ONNX Runtime Mobile Android or iOS package.""",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "--config_path",
        help="Path to required operators and types configuration used to build the pre-built ORT mobile package.",
        required=False,
        type=pathlib.Path,
        default=check_model_can_use_ort_mobile_pkg.get_default_config_path(),
    )
    parser.add_argument(
        "--log_level", choices=["debug", "info", "warning", "error"], default="info", help="Logging level"
    )
    parser.add_argument("model_path", help="Path to ONNX model to check", type=pathlib.Path)

    args = parser.parse_args()
    logger = logging.getLogger("check_usability")
    if args.log_level == "debug":
        logger.setLevel(logging.DEBUG)
    elif args.log_level == "info":
        logger.setLevel(logging.INFO)
    elif args.log_level == "warning":
        logger.setLevel(logging.WARNING)
    else:
        logger.setLevel(logging.ERROR)

    try_eps = usability_checker.analyze_model(args.model_path, skip_optimize=False, logger=logger)
    check_model_can_use_ort_mobile_pkg.run_check(args.model_path, args.config_path, logger)

    logger.info(
        "Run `python -m onnxruntime.tools.convert_onnx_models_to_ort ...` to convert the ONNX model to ORT "
        "format. "
        "By default, the conversion tool will create an ORT format model with saved optimizations which can "
        "potentially be applied at runtime (with a .with_runtime_opt.ort file extension) for use with NNAPI "
        "or CoreML, and a fully optimized ORT format model (with a .ort file extension) for use with the CPU "
        "EP."
    )

    if try_eps:
        logger.info(
            "As NNAPI or CoreML may provide benefits with this model it is recommended to compare the "
            "performance of the <model>.with_runtime_opt.ort model using the NNAPI EP on Android, and the "
            "CoreML EP on iOS, against the performance of the <model>.ort model using the CPU EP."
        )
    else:
        logger.info("For optimal performance the <model>.ort model should be used with the CPU EP.")


if __name__ == "__main__":
    check_usability()