# TransformerModel example
This example was adapted from PyTorch's "Sequence-to-Sequence Modeling with nn.Transformer and TorchText" tutorial.
## Requirements
- PyTorch 1.6+
- TorchText 0.6+
- ONNX Runtime 1.5+
## Running the PyTorch version

```bash
python pt_train.py
```

## Running the ONNX Runtime version

```bash
python ort_train.py
```
## Optional arguments
| Argument | Description | Default |
|---|---|---|
| --batch-size | input batch size for training | 20 |
| --test-batch-size | input batch size for testing | 20 |
| --epochs | number of epochs to train | 2 |
| --lr | learning rate | 0.001 |
| --no-cuda | disables CUDA training | False |
| --seed | random seed | 1 |
| --log-interval | how many batches to wait before logging training status | 200 |
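As a rough illustration of how the options in the table above could be wired up, here is a minimal `argparse` sketch. The flag names and defaults mirror the table, but the actual parsers inside `pt_train.py` and `ort_train.py` may differ in detail.

```python
import argparse


def build_arg_parser():
    # Hypothetical sketch: mirrors the optional arguments documented above.
    # The real scripts (pt_train.py / ort_train.py) may define these differently.
    parser = argparse.ArgumentParser(description="TransformerModel example")
    parser.add_argument("--batch-size", type=int, default=20,
                        help="input batch size for training")
    parser.add_argument("--test-batch-size", type=int, default=20,
                        help="input batch size for testing")
    parser.add_argument("--epochs", type=int, default=2,
                        help="number of epochs to train")
    parser.add_argument("--lr", type=float, default=0.001,
                        help="learning rate")
    parser.add_argument("--no-cuda", action="store_true", default=False,
                        help="disables CUDA training")
    parser.add_argument("--seed", type=int, default=1,
                        help="random seed")
    parser.add_argument("--log-interval", type=int, default=200,
                        help="how many batches to wait before logging training status")
    return parser


# Example: override a couple of defaults, leave the rest as in the table.
args = build_arg_parser().parse_args(["--epochs", "5", "--lr", "0.0005"])
print(args.epochs, args.lr, args.batch_size)
```

For instance, `python pt_train.py --epochs 5 --lr 0.0005` would train for 5 epochs at a learning rate of 0.0005 while keeping the default batch size of 20.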