Commit graph

26 commits

Author SHA1 Message Date
Justin Chu
0c1a5098dc
Disable PERF* rules in ruff to allow better readability (#16834)
### Description

Disable two PERF* rules in ruff to improve readability. The rationale is
commented inline. This change also removes the noqa directives made
unnecessary by the rule change.

### Motivation and Context

Readability
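As a hypothetical illustration of the readability trade-off (the PR does not name the exact rules disabled here), ruff's PERF lint family flags manual accumulation loops that could be rewritten as comprehensions; disabling such a rule lets authors keep the explicit loop when it reads better:

```python
def even_squares(values):
    # Explicit loop: a PERF-style rule would suggest rewriting this as a
    # comprehension, but the loop form can be easier to read and extend.
    result = []
    for v in values:
        if v % 2 == 0:
            result.append(v * v)
    return result


def even_squares_comprehension(values):
    # The equivalent comprehension form the lint rule would prefer.
    return [v * v for v in values if v % 2 == 0]
```

Both forms behave identically; the rule change only stops ruff from forcing the second shape.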
2023-07-25 15:38:22 -07:00
Justin Chu
d79515041c
[Better Engineering] Bump ruff to 0.0.278 and fix new lint errors (#16789)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at
bottom):
* __->__ #16789

Bump ruff to 0.0.278 and fix new lint errors. I added `# noqa` to all
existing RUF012 errors (the rule requires mutable class variables to be
annotated with `ClassVar`), as well as to all PERF issues.
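For context, RUF012 asks that mutable class-level attributes carry a `ClassVar` annotation so it is explicit they are shared across instances rather than per-instance state; a minimal sketch (with hypothetical names):

```python
from typing import ClassVar


class Registry:
    # RUF012-compliant: the ClassVar annotation marks this mutable dict as
    # a single class-level attribute shared by all instances.
    handlers: ClassVar[dict] = {}

    @classmethod
    def register(cls, name, target):
        cls.handlers[name] = target
```

Without the annotation, ruff 0.0.278 flags the bare `handlers: dict = {}` as RUF012.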

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-07-21 12:53:41 -07:00
Xavier Dupré
e726151b5c
Introduce float 8 types (#14731)
### Description
The PR implements FloatE4M3FN, FloatE5M2, FloatE4M3FNUZ, and FloatE5M2FNUZ
as described in PR https://github.com/onnx/onnx/pull/4805. It uses the CUDA
API to cast float/half to float8 when CUDA>=11.8, and a custom
implementation when CUDA<11.8.

* It implements Cast, QuantizeLinear, and DequantizeLinear for all types on
CPU, and only for FloatE4M3FN and FloatE5M2 on CUDA.
* It extends the supported types for Shape, Reshape, Identity, and the
control flow operators If, Loop, and Scan.
* It implements Equal(19).
* The Cast, QuantizeLinear, and DequantizeLinear operators now support a
`saturate` parameter, valid only for float 8 types and true by default.
When true, any out-of-range value is converted to the maximum float 8
value; when false, it becomes infinity.
* QuantizeLinear and DequantizeLinear now support multiple scales on CUDA
(and ROCm by extension): the scale can be a 1D tensor with one scale per channel.
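The `saturate` semantics above can be sketched in plain Python. This is a simplified model, not the kernel implementation: it only handles the clamp-vs-infinity decision for FloatE4M3FN (whose largest finite value is 448) and omits the actual 3-bit mantissa rounding:

```python
import math

E4M3FN_MAX = 448.0  # largest finite value representable in FloatE4M3FN


def cast_to_e4m3fn_range(x: float, saturate: bool = True) -> float:
    """Model the `saturate` attribute for an out-of-range float8 cast."""
    if abs(x) <= E4M3FN_MAX:
        # In-range values would be rounded to the nearest float8; the
        # rounding step is omitted in this sketch.
        return x
    if saturate:
        # saturate=True (the default): clamp to the max float 8 value.
        return math.copysign(E4M3FN_MAX, x)
    # saturate=False: out-of-range values become infinity.
    return math.copysign(math.inf, x)
```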

### Motivation and Context
Supports the latest onnx version.

Fixes
[AB#15395](https://aiinfra.visualstudio.com/6a833879-cd9b-44a4-a9de-adc2d818f13c/_workitems/edit/15395)

---------

Co-authored-by: Xavier Dupre <xadupre@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
2023-05-30 13:25:58 -07:00
Justin Chu
d834ec895a
Adopt lintrunner as the linting tool - take 2 (#15085)
### Description

`lintrunner` is a linter runner used successfully by pytorch, onnx and
onnx-script. It provides a uniform experience running linters locally
and in CI, and supports all major dev systems: Windows, Linux and macOS.
The checks are enforced by the `Python format` workflow.

This PR adopts `lintrunner` for onnxruntime and fixes ~2000 flake8 errors
in Python code. `lintrunner` now runs all required Python lints,
including `ruff` (replacing `flake8`), `black` and `isort`. Future lints
like `clang-format` can be added.

Most errors are auto-fixed by `ruff` and the fixes should be considered
robust.

Lints that are more complicated to fix have `# noqa` applied for now and
should be fixed in follow-up PRs.

### Notable changes

1. This PR **removed some suboptimal patterns**:

	- `not xxx in` -> `xxx not in` membership checks
	- bare excepts (`except:` -> `except Exception`)
	- unused imports
	
	The follow up PR will remove:
	
	- `import *`
	- mutable values as default in function definitions (`def func(a=[])`)
	- more unused imports
	- unused local variables
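
The patterns listed above can be sketched as before/after pairs (function names are illustrative, not from the PR):

```python
# `not x in seq`  ->  `x not in seq`: the dedicated operator reads as one unit.
def is_missing(key, mapping):
    return key not in mapping


# bare `except:`  ->  `except Exception:`, so KeyboardInterrupt and
# SystemExit are not silently swallowed.
def safe_int(text, default=0):
    try:
        return int(text)
    except Exception:
        return default


# mutable default argument  ->  None sentinel, so each call gets a fresh
# list instead of sharing one list across calls.
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```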

2. Use `ruff` to replace `flake8`. `ruff` is much (40x) faster than
flake8 and more robust. We are using it successfully in onnx and
onnx-script, and it can auto-fix many flake8 errors.

3. Removed the legacy flake8 ci flow and updated docs.

4. The added workflow supports SARIF code-scanning reports on GitHub;
example snapshot:

![image](https://user-images.githubusercontent.com/11205048/212598953-d60ce8a9-f242-4fa8-8674-8696b704604a.png)

5. Removed `onnxruntime-python-checks-ci-pipeline` as redundant.

### Motivation and Context

Unified linting experience in CI and local.

Replacing https://github.com/microsoft/onnxruntime/pull/14306

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-03-24 15:29:03 -07:00
Justin Chu
fdce4fa6af
Format all python files under onnxruntime with black and isort (#11324)
Description: Format all python files under onnxruntime with black and isort.

After checking in, we can use .git-blame-ignore-revs to ignore the formatting PR in git blame.

#11315, #11316
2022-04-26 09:35:16 -07:00
Edward Chen
e53422c6d0
Update convert_onnx_models_to_ort.py to support runtime optimizations. (#10765)
Add runtime optimization support to ONNX -> ORT format conversion script.
Replace `--optimization_level`, `--use_nnapi`, and `--use_coreml` with a new `--optimization_style` option.
2022-03-14 16:50:41 -07:00
Scott McKay
6072c6b65e
Simplify QLinearConv registration so type reduction works with it. (#10747)
* Simplify QLinearConv registration so type reduction works with it.
* Update QLinearMatMul registration to be a standard typed registration
2022-03-04 14:06:04 +10:00
Scott McKay
2ca9566994
Add range of helpers for making usage of ORT Mobile easier. (#10458)
* Add range of helpers for making usage of ORT Mobile easier.
2022-02-18 07:35:25 +10:00
Guoyu Wang
5ad6dbb314
Remove experimental from ORT format namespace (#9729)
* schema change

* cc changes

* remove temp debug code

* Add fbs namespace to session_state_flatbuffers_utils.h

* Add fbs namespace to all ort format utils
2021-11-11 19:46:30 -08:00
Edward Chen
011cb8fd48
Fix Where op type reduction processing (#9033)
* Update type reduction script to track Where Op's second input type.

* Clean up op_kernel_type_control.h includes.

* Use more maintainable include.
2021-09-13 08:37:58 -07:00
Scott McKay
858989293d
Reduce binary size of strided copy used by Concat (#8913)
* Change the strided copy to switch on data size not data type.
Move to header so we can reduce on the enabled types.
Setup type reduction for Concat now that it's using this implementation.
2021-09-02 08:19:20 +10:00
Scott McKay
57782b3463
Add supported operators/types documentation for the ORT Mobile package (#7807)
* Add ability to generate documentation for the ORT Mobile package using the build configuration as input.
2021-05-26 15:57:40 +10:00
Scott McKay
d6df5764d7
Android package infrastructure (#7430)
* Include ORT format model conversion scripts and infrastructure in ORT python package.
  - tweak existing script setup so it can be easily run directly and from the ORT python package
Add config file and readme for Android minimal build package
Update ORT Mobile docs
Disable warning if 'all' optimizations are enabled but NCHWc transformer is excluded (device specific optimizations don't apply in this scenario so the warning is moot).

* Address PR comments
2021-04-30 14:23:54 +10:00
Scott McKay
329fd03bb4
Add int32_t as required type to some operators (#7192)
* Updates to some operators to always support int32 and int64 based on testing of Android package build config with a minimal build.

If an operator can be used for shape manipulation (int64) it is frequently used for indices manipulation (int32), so we enable both types for that set of ops.
  - e.g. BERT models take indices as input
  - Scatter/Gather ops utilize indices

Misc. fix to python bindings to exclude a call that fails in a minimal build.
2021-04-01 19:32:34 +10:00
Edward Chen
0ccfe6c86a
Enable type reduction for Scatter/ScatterElements CPU kernels (#7171)
Enable type reduction for Scatter/ScatterElements CPU kernels. Some refactoring to reduce binary size.
Add MLTypeCallDispatcher methods.
Minor cleanup for Pad CPU kernel.
2021-03-30 11:02:24 -07:00
Edward Chen
53392664d3
Enable type reduction for Shrink, Sign, SplitToSequence CPU kernels (#7090)
Enable type reduction for Shrink, Sign, SplitToSequence CPU kernels.
Some other type reduction changes including refactoring to specify element types in a single place.
2021-03-23 09:57:33 -07:00
Edward Chen
4cbb8e166a
Update kernel def hashing (#7019)
Update the kernel def hashing in ORT format models. The new hashing logic ignores the ordering of type constraint types.
This is a backward compatibility breaking change, but we don't guarantee backward compatibility yet.
2021-03-22 09:28:27 -07:00
Edward Chen
aa60a8368f
Update type reduction operator type usage processors set. (#6976) 2021-03-11 09:22:53 -08:00
Edward Chen
b6c4a7ac54
Support required types when excluding typed registrations (#6871) 2021-03-08 08:22:07 -08:00
jingyanwangms
f22f04a109
Add comment (#6860)
Co-authored-by: Jingyan Wang <jingywa@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-03-02 18:54:25 -08:00
Scott McKay
02c7873b0e
Update ORT model conversion script to support custom ops (#6701)
* Add support for custom ops library to the ORT model conversion script
Simplify model conversion now that we read ops from the ORT format model.
Enable custom ops in the python bindings if custom ops are turned on in a minimal build.
* Add test of model conversion involving custom ops.
2021-02-17 12:52:39 +10:00
Scott McKay
c5d2538314
Add more kernels that have typed registrations to the operators we track type usage for. (#6565) 2021-02-05 15:10:54 +10:00
Scott McKay
c49d1dbc4b
Add type reduction support to Slice and Transpose (#6547)
* Add type reduction support to Slice and Transpose
2021-02-05 11:08:23 +10:00
Scott McKay
6cb8f8c812
Support disabling a typed kernel registration that uses the output type (#6530)
* Update infrastructure to support disabling a typed kernel registration that uses output 0 for the type (vs. the normal use case of input 0).
2021-02-03 14:22:32 +10:00
Scott McKay
8d53ef69e5
Add type reduction support to Min, Max and Pow (#6519)
* Add type reduction support to Min, Max and Pow
Update the C++ type reduction infrastructure to allow specifying an opset for the supported types list, as those can change across opset versions.
Minor updates to the type usage tracking script
* Add 'all opsets' macros and constant
2021-02-03 06:51:35 +10:00
Scott McKay
c84bb9df9f
Add ability to track per operator types in reduced build config. (#6428)
* Add ability to generate configuration that includes required types for individual operators, to allow build size reduction based on that.
  - Add python bindings for ORT format models
    - Add script to update bindings and help info
  - Add parsing of ORT format models
  - Add ability to enable type reduction to config generation
  - Update build.py to only allow operator/type reduction via config
    - simpler to require config to be generated first
    - can't mix a type aware (ORT format model only) and non-type aware config as that may result in insufficient types being enabled
  - Add script to create reduced build config
  - Update CIs
2021-01-29 07:59:51 +10:00