mirror of
https://github.com/saymrwulf/onnxruntime.git
synced 2026-05-15 20:50:42 +00:00
### Description

`lintrunner` is a linter runner used successfully by pytorch, onnx, and onnx-script. It provides a uniform experience for running linters locally and in CI, and supports all major dev systems: Windows, Linux, and macOS. The checks are enforced by the `Python format` workflow.

This PR adopts `lintrunner` for onnxruntime and fixes ~2000 flake8 errors in Python code. `lintrunner` now runs all required Python lints, including `ruff` (replacing `flake8`), `black`, and `isort`. Future lints like `clang-format` can be added. Most errors are auto-fixed by `ruff`, and the fixes should be considered robust. Lints that are more complicated to fix are marked with `# noqa` for now and should be fixed in follow-up PRs.

### Notable changes

1. This PR **removes some suboptimal patterns**:
   - `not xxx in` -> `xxx not in` membership checks
   - bare excepts (`except:` -> `except Exception`)
   - unused imports

   A follow-up PR will remove:
   - `import *`
   - mutable values as defaults in function definitions (`def func(a=[])`)
   - more unused imports
   - unused local variables
2. Use `ruff` to replace `flake8`. `ruff` is much (40x) faster than `flake8` and more robust. We are using it successfully in onnx and onnx-script. It also supports auto-fixing many flake8 errors.
3. Removed the legacy flake8 CI flow and updated docs.
4. The added workflow supports SARIF code-scanning reports on GitHub.
5. Removed `onnxruntime-python-checks-ci-pipeline` as redundant.

### Motivation and Context

Unified linting experience in CI and locally. Replaces https://github.com/microsoft/onnxruntime/pull/14306

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
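To make the pattern fixes concrete, here is a hedged illustration of the three anti-patterns listed above and their corrected forms. The function names (`parse_int`, `append_item`) are hypothetical examples, not code from the onnxruntime codebase.

```python
items = [1, 2, 3]

# Membership test: `not 4 in items` becomes the idiomatic form below.
missing = 4 not in items


# Bare except: `except:` becomes a concrete exception class.
def parse_int(text):
    try:
        return int(text)
    except ValueError:  # previously a bare `except:`
        return None


# Mutable default: `def append_item(v, acc=[])` becomes a None sentinel,
# so each call without `acc` gets a fresh list.
def append_item(v, acc=None):
    if acc is None:
        acc = []
    acc.append(v)
    return acc


print(missing, parse_int("abc"), append_item(1), append_item(2))
# → True None [1] [2]
```

Note that with the old mutable default, `append_item(2)` would have returned `[1, 2]`, since the same list persists across calls.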
55 lines
1.6 KiB
Python
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.

"""
.. _l-example-simple-usage:

Load and predict with ONNX Runtime and a very simple model
==========================================================

This example demonstrates how to load a model and compute
the output for an input vector. It also shows how to
retrieve the definition of its inputs and outputs.
"""

import numpy

import onnxruntime as rt
from onnxruntime.datasets import get_example
#########################
# Let's load a very simple model.
# The model is available on github `onnx...test_sigmoid <https://github.com/onnx/onnx/blob/main/onnx/backend/test/data/node/test_sigmoid>`_.

example1 = get_example("sigmoid.onnx")
sess = rt.InferenceSession(example1, providers=rt.get_available_providers())
#########################
# Let's see the input name and shape.

input_name = sess.get_inputs()[0].name
print("input name", input_name)
input_shape = sess.get_inputs()[0].shape
print("input shape", input_shape)
input_type = sess.get_inputs()[0].type
print("input type", input_type)
#########################
# Let's see the output name and shape.

output_name = sess.get_outputs()[0].name
print("output name", output_name)
output_shape = sess.get_outputs()[0].shape
print("output shape", output_shape)
output_type = sess.get_outputs()[0].type
print("output type", output_type)
#########################
# Let's compute its outputs (or predictions if it is a machine learned model).

import numpy.random  # noqa: E402

x = numpy.random.random((3, 4, 5))
x = x.astype(numpy.float32)
res = sess.run([output_name], {input_name: x})
print(res)
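As a sanity check, and assuming the model computes the standard logistic sigmoid (an assumption based on the file name `sigmoid.onnx`; the example above does not state the formula), here is a minimal NumPy reference you could compare the session output against:

```python
import numpy


def sigmoid(z):
    # Standard logistic sigmoid: 1 / (1 + e^(-z)).
    # Assumed to match what sigmoid.onnx computes element-wise.
    return 1.0 / (1.0 + numpy.exp(-z))


x = numpy.random.random((3, 4, 5)).astype(numpy.float32)
ref = sigmoid(x)
print(ref.shape)  # → (3, 4, 5), the same shape as res[0] above
# If the assumption holds, numpy.allclose(res[0], ref, atol=1e-5) is True
# when `ref` is computed from the same `x` fed to sess.run.
```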