onnxruntime/docs/python/inference/examples/plot_metadata.py
Justin Chu d834ec895a
Adopt lintrunner as the linting tool - take 2 (#15085)
### Description

`lintrunner` is a linter runner used successfully by pytorch, onnx and
onnx-script. It provides a uniform experience for running linters locally
and in CI, and it supports all major dev systems: Windows, Linux and macOS.
The checks are enforced by the `Python format` workflow.

This PR adopts `lintrunner` for onnxruntime and fixes ~2000 flake8 errors
in Python code. `lintrunner` now runs all required Python lints,
including `ruff` (replacing `flake8`), `black` and `isort`. Future lints
like `clang-format` can be added.

Most errors are auto-fixed by `ruff` and the fixes should be considered
robust.

Lints that are more complicated to fix are suppressed with `# noqa` for
now and should be fixed in follow-up PRs.

### Notable changes

1. This PR **removed some suboptimal patterns**:

	- `not xxx in` -> `xxx not in` membership checks
	- bare excepts (`except:` -> `except Exception`)
	- unused imports
	
	Follow-up PRs will remove:
	
	- `import *`
	- mutable values as default in function definitions (`def func(a=[])`)
	- more unused imports
	- unused local variables
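A minimal sketch of the patterns above and the replacements the lint enforces (the function and values are illustrative, not taken from the codebase):

```python
# Each construct below is the lint-clean form of a pattern removed in this PR.

def parse_flags(raw, seen=None):  # avoid `seen=[]`: a mutable default is shared across calls
    if seen is None:
        seen = []
    for flag in raw:
        if flag not in seen:  # `flag not in seen`, not `not flag in seen`
            seen.append(flag)
    return seen

try:
    value = int("not-a-number")
except ValueError:  # a narrow exception instead of a bare `except:`
    value = 0

print(parse_flags(["-v", "-v", "-q"]), value)  # ['-v', '-q'] 0
```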

2. Replaced `flake8` with `ruff`. `ruff` is much (~40x) faster than
flake8 and more robust. We are using it successfully in onnx and
onnx-script, and it auto-fixes many flake8 errors.

3. Removed the legacy flake8 CI flow and updated the docs.

4. The added workflow supports SARIF code-scanning reports on GitHub;
example snapshot:

![image](https://user-images.githubusercontent.com/11205048/212598953-d60ce8a9-f242-4fa8-8674-8696b704604a.png)

5. Removed `onnxruntime-python-checks-ci-pipeline` as redundant.

### Motivation and Context

A unified linting experience in CI and locally.

Replaces https://github.com/microsoft/onnxruntime/pull/14306

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-03-24 15:29:03 -07:00


# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""
Metadata
========
The ONNX format can embed metadata about how the
model was produced. This is useful in production
to keep track of which model instance was
deployed at a given time.
Let's see how to do that with a simple
logistic regression model trained with
*scikit-learn* and converted with *sklearn-onnx*.
"""
from onnxruntime.datasets import get_example
example = get_example("logreg_iris.onnx")
import onnx  # noqa: E402
model = onnx.load(example)
print(f"doc_string={model.doc_string}")
print(f"domain={model.domain}")
print(f"ir_version={model.ir_version}")
print(f"metadata_props={model.metadata_props}")
print(f"model_version={model.model_version}")
print(f"producer_name={model.producer_name}")
print(f"producer_version={model.producer_version}")
#############################
# With *ONNX Runtime*:
import onnxruntime as rt  # noqa: E402
sess = rt.InferenceSession(example, providers=rt.get_available_providers())
meta = sess.get_modelmeta()
print(f"custom_metadata_map={meta.custom_metadata_map}")
print(f"description={meta.description}")
print(f"domain={meta.domain}")
print(f"graph_name={meta.graph_name}")
print(f"producer_name={meta.producer_name}")
print(f"version={meta.version}")