pytorch/benchmarks/operator_benchmark/pt/stack_test.py
Sam Estep e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change those error codes to anything else, the warnings are still suppressed, because flake8 is not actually reading the codes.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```
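
To make the colon's effect concrete, here is a minimal illustrative example (variable names are made up for the demo):

```python
# Missing colon: flake8 parses this as a bare `noqa`, so the "E225" text is
# ignored and every error on the line (both E225 and E261) is silenced.
x=1 # noqa E225

# Qualified form: only E225 (missing whitespace around operator) is suppressed;
# any other error on the line would still be reported.
y=2  # noqa: E225
```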

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.
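
For illustration only, a hypothetical sketch of what such a `grep`-based check could look like (this is not the exact rule added by this PR; the function name, pattern, and demo file are all invented):

```shell
# Flag any `noqa` comment that is not qualified with a trailing colon.
check_noqa() {
  grep -nP '#\s*noqa(?!:)' "$1" || true
}

# Demo on a throwaway file: only the first line should be flagged.
printf 'x = 1  # noqa\ny = 2  # noqa: E501\n' > /tmp/noqa_demo.py
check_noqa /tmp/noqa_demo.py
```

A real lint would fail the build when any match is found instead of printing the offending lines.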

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00


import operator_benchmark as op_bench
import torch
import random
from typing import List

"""Microbenchmarks for Stack operator"""

# Configs for PT stack operator
stack_configs_static_runtime = op_bench.config_list(
    attr_names=['sizes', 'N'],
    attrs=[
        [(20, 40), 5],
        [(1, 40), 5],
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dim': list(range(3))
    },
    tags=['static_runtime'],
)
stack_configs_short = op_bench.config_list(
    attr_names=['sizes', 'N'],
    attrs=[
        [(1, 1, 1), 2],  # noqa: E241
        [(512, 512, 2), 2],  # noqa: E241
        [(128, 1024, 2), 2],  # noqa: E241
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dim': list(range(4))
    },
    tags=['short'],
)
stack_configs_long = op_bench.config_list(
    attr_names=['sizes', 'N'],
    attrs=[
        [(2**10, 2**10, 2), 2],  # noqa: E241
        [(2**10+1, 2**10-1, 2), 2],  # noqa: E226,E241
        [(2**10, 2**10, 2), 2],  # noqa: E241
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dim': list(range(4))
    },
    tags=['long'],
)
# There is a different codepath on CUDA for >4 dimensions
stack_configs_multidim = op_bench.config_list(
    attr_names=['sizes', 'N'],
    attrs=[
        [(2**6, 2**5, 2**2, 2**4, 2**5), 2],  # noqa: E241
        [(2**4, 2**5, 2**2, 2**4, 2**5), 8],  # noqa: E241
        [(2**3+1, 2**5-1, 2**2+1, 2**4-1, 2**5+1), 17],  # noqa: E226,E241
    ],
    cross_product_configs={
        'device': ['cpu', 'cuda'],
        'dim': list(range(6))
    },
    tags=['multidim'],
)
class StackBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, sizes, N, dim, device):
        random.seed(42)
        inputs = []
        gen_sizes = []
        if type(sizes) == list and N == -1:
            gen_sizes = sizes
        else:
            for i in range(N):
                gen_sizes.append([old_size() if callable(old_size) else old_size for old_size in sizes])

        for s in gen_sizes:
            inputs.append(torch.rand(s, device=device))
        result = torch.rand(gen_sizes[0], device=device)
        self.inputs = {
            "result": result,
            "inputs": inputs,
            "dim": dim
        }
        self.set_module_name('stack')

    def forward(self, result: torch.Tensor, inputs: List[torch.Tensor], dim: int):
        return torch.stack(inputs, dim=dim, out=result)
op_bench.generate_pt_test(stack_configs_static_runtime +
                          stack_configs_short +
                          stack_configs_long +
                          stack_configs_multidim,
                          StackBenchmark)


if __name__ == "__main__":
    op_bench.benchmark_runner.main()
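
For context on the operation being benchmarked, a minimal sketch of `torch.stack` semantics (the tensor shapes here are illustrative, not taken from the configs above):

```python
import torch

# torch.stack inserts a brand-new dimension: stacking N tensors of shape S
# along dim=d yields a tensor of shape S[:d] + (N,) + S[d:].
a = torch.zeros(2, 3)
b = torch.ones(2, 3)

out = torch.stack([a, b], dim=0)
print(out.shape)  # torch.Size([2, 2, 3])

# The benchmark's forward() uses the out= variant, which writes into a
# preallocated result tensor of the matching shape.
result = torch.empty(2, 2, 3)
torch.stack([a, b], dim=0, out=result)
```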