onnxruntime/docs/python/inference/examples/plot_pipeline.py
Justin Chu d834ec895a
Adopt lintrunner as the linting tool - take 2 (#15085)
### Description

`lintrunner` is a linter runner used successfully by pytorch, onnx and
onnx-script. It provides a uniform experience for running linters locally
and in CI, and it supports all major dev systems: Windows, Linux and macOS.
The checks are enforced by the `Python format` workflow.

This PR adopts `lintrunner` in onnxruntime and fixes ~2000 flake8 errors
in Python code. `lintrunner` now runs all required Python lints,
including `ruff` (replacing `flake8`), `black` and `isort`. Future linters
such as `clang-format` can be added later.

Most errors were auto-fixed by `ruff`, and the fixes should be considered
robust.

Lints that are more complicated to fix are suppressed with `# noqa` for now
and should be addressed in follow-up PRs.

### Notable changes

1. This PR **removed some suboptimal patterns**:

	- `not xxx in` -> `xxx not in` membership checks
	- bare excepts (`except:` -> `except Exception`)
	- unused imports
	
	A follow-up PR will remove:
	
	- `import *`
	- mutable values as default in function definitions (`def func(a=[])`)
	- more unused imports
	- unused local variables

2. Use `ruff` to replace `flake8`. `ruff` is much faster (roughly 40x) than
flake8 and more robust. We are using it successfully in onnx and
onnx-script, and it supports auto-fixing many flake8 errors.

3. Removed the legacy flake8 CI flow and updated the docs.

4. The added workflow supports SARIF code-scanning reports on GitHub;
example snapshot:

![image](https://user-images.githubusercontent.com/11205048/212598953-d60ce8a9-f242-4fa8-8674-8696b704604a.png)

5. Removed `onnxruntime-python-checks-ci-pipeline`, which is now redundant.
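The suboptimal patterns removed in item 1 can be illustrated with a small before/after sketch; the function and names here are hypothetical, not taken from this repo:

```python
# Illustrative only: the kinds of rewrites described above.

def count_missing(keys, table=None):  # not `table={}`: avoid a mutable default
    table = {} if table is None else table
    # `key not in table` rather than `not key in table`
    return sum(1 for key in keys if key not in table)

try:
    count_missing(None)  # TypeError: None is not iterable
except Exception:  # narrowed from a bare `except:`
    print("handled")

print(count_missing(["a", "b"], {"a": 1}))  # -> 1
```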

### Motivation and Context

A unified linting experience in CI and locally.

Replaces https://github.com/microsoft/onnxruntime/pull/14306

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-03-24 15:29:03 -07:00


# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""
Draw a pipeline
===============
There is no other way to look into one model stored
in ONNX format than looking into its node with
*onnx*. This example demonstrates
how to draw a model and to retrieve it in *json*
format.
.. contents::
:local:
Retrieve a model in JSON format
+++++++++++++++++++++++++++++++
That's the most simple way.
"""
from onnxruntime.datasets import get_example

example1 = get_example("mul_1.onnx")

import onnx  # noqa: E402

model = onnx.load(example1)  # model is a ModelProto protobuf message
print(model)
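#################################
# (An optional aside, not part of the original example: since *ModelProto*
# is a protobuf message, the protobuf runtime that *onnx* depends on can
# render it as actual JSON, not just the text format printed above.)
from google.protobuf.json_format import MessageToJson  # noqa: E402

print(MessageToJson(model)[:300])  # first characters of the JSON dump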
#################################
# Draw a model with ONNX
# ++++++++++++++++++++++
# We use `net_drawer.py <https://github.com/onnx/onnx/blob/main/onnx/tools/net_drawer.py>`_
# included in *onnx* package.
# We use *onnx* to load the model
# in a different way than before.
from onnx import ModelProto  # noqa: E402

model = ModelProto()
with open(example1, "rb") as fid:
    content = fid.read()
model.ParseFromString(content)
###################################
# We convert it into a graph.
from onnx.tools.net_drawer import GetOpNodeProducer, GetPydotGraph  # noqa: E402

pydot_graph = GetPydotGraph(
    model.graph, name=model.graph.name, rankdir="LR", node_producer=GetOpNodeProducer("docstring")
)
pydot_graph.write_dot("graph.dot")
#######################################
# Then into an image
import os  # noqa: E402

os.system("dot -O -Tpng graph.dot")
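#######################################
# (Another optional aside: ``dot`` comes from Graphviz and may not be
# installed; ``shutil.which`` can detect that instead of letting
# ``os.system`` fail silently.)
import shutil  # noqa: E402

if shutil.which("dot") is None:
    print("Graphviz 'dot' was not found on PATH; no image was produced.")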
################################
# Which we display...
import matplotlib.pyplot as plt  # noqa: E402

image = plt.imread("graph.dot.png")
plt.imshow(image)
plt.axis("off")