# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

"""
|
|
|
|
.. _l-example-backend-api:
|
|
|
|
ONNX Runtime Backend for ONNX
|
|
=============================
|
|
|
|
*ONNX Runtime* extends the
|
|
`onnx backend API <https://github.com/onnx/onnx/blob/main/docs/ImplementingAnOnnxBackend.md>`_
|
|
to run predictions using this runtime.
|
|
Let's use the API to compute the prediction
|
|
of a simple logistic regression model.
|
|
"""
import numpy as np
from onnx import load

import onnxruntime.backend as backend

########################################
# The device depends on how the package was compiled,
# GPU or CPU.
from onnxruntime import datasets, get_device
from onnxruntime.capi.onnxruntime_pybind11_state import InvalidArgument

device = get_device()

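########################################
# A minimal sketch of a pre-flight check (an addition to the original
# example): the ONNX backend API also exposes ``supports_device``, so
# availability can be verified before preparing a model on *device*.

if not backend.supports_device(device):
    # The build does not support the reported device; predictions below
    # would fall back or fail, so surface that early.
    print(f"device {device!r} is not supported by this build")
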
name = datasets.get_example("logreg_iris.onnx")
model = load(name)

rep = backend.prepare(model, device)
x = np.array([[-1.0, -2.0]], dtype=np.float32)
try:
    label, proba = rep.run(x)
    print(f"label={label}")
    print(f"probabilities={proba}")
except (RuntimeError, InvalidArgument) as e:
    print(e)

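########################################
# A hedged sketch of what ``backend.prepare`` wraps (not part of the
# original example): the same prediction can be computed with an
# ``InferenceSession`` directly.  The input name is read back from the
# session rather than assumed.

from onnxruntime import InferenceSession

sess = InferenceSession(name, providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
try:
    results = sess.run(None, {input_name: x})
    print(f"outputs={results}")
except (RuntimeError, InvalidArgument) as e:
    print(e)
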
########################################
# The backend can also load the model directly
# from the file, without using *onnx* to deserialize it first.

rep = backend.prepare(name, device)
x = np.array([[-1.0, -2.0]], dtype=np.float32)
try:
    label, proba = rep.run(x)
    print(f"label={label}")
    print(f"probabilities={proba}")
except (RuntimeError, InvalidArgument) as e:
    print(e)

########################################
# The backend API is implemented by other frameworks
# and makes it easier to switch between multiple runtimes
# with the same API.
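
########################################
# A hedged sketch of such a switch (the helper below is illustrative,
# not from the original example): any module implementing the ONNX
# backend API can be passed in place of ``onnxruntime.backend`` without
# changing the calling code.  The ``"CPU"`` device string follows the
# onnx backend convention and is an assumption here.

def predict_with(backend_module, model_path, batch):
    """Prepare *model_path* on CPU with *backend_module* and run *batch*."""
    rep_ = backend_module.prepare(model_path, "CPU")
    return rep_.run(batch)

try:
    print(predict_with(backend, name, x))
except (RuntimeError, InvalidArgument) as e:
    print(e)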