pytorch/torch/quantization/fx
Vasiliy Kuznetsov f15ab8a7f2 AO migration: replace torch internal callsites (#94170)
Summary:

Do the following renames:
`torch.quantization` -> `torch.ao.quantization`
`torch.nn.quantized` -> `torch.ao.nn.quantized`
`torch.nn.quantizable` -> `torch.ao.nn.quantizable`
`torch.nn.qat` -> `torch.ao.nn.qat`
`torch.nn.intrinsic` -> `torch.ao.nn.intrinsic`

Then, rename `torch.ao.nn.quantized._reference` -> `torch.ao.nn.quantized.reference` to clean up the aftermath of https://github.com/pytorch/pytorch/pull/84974

Then, manually update `test/test_module_init.py` to fix trailing whitespace left behind by the replacement.

Run this script to do the replacements: https://gist.github.com/vkuzo/7f7afebf8c31b9ba48306223e68a1c82
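The linked gist is the authoritative script; a minimal sketch of the replacement logic it performs (the `RENAMES` table is transcribed from the rename list above, and the function name `migrate` is illustrative) might look like:

```python
import re

# Ordered rename map from the commit message. The order matters:
# torch.ao.nn.quantized._reference only exists after the
# torch.nn.quantized -> torch.ao.nn.quantized rewrite has run.
RENAMES = [
    (r"torch\.quantization", "torch.ao.quantization"),
    (r"torch\.nn\.quantized", "torch.ao.nn.quantized"),
    (r"torch\.nn\.quantizable", "torch.ao.nn.quantizable"),
    (r"torch\.nn\.qat", "torch.ao.nn.qat"),
    (r"torch\.nn\.intrinsic", "torch.ao.nn.intrinsic"),
    # Follow-up cleanup after pytorch/pytorch#84974.
    (r"torch\.ao\.nn\.quantized\._reference", "torch.ao.nn.quantized.reference"),
]

def migrate(source: str) -> str:
    """Apply the AO migration renames to one file's source text."""
    for pattern, replacement in RENAMES:
        source = re.sub(pattern, replacement, source)
    return source
```

Note that the patterns do not collide: `torch\.nn\.quantized` cannot re-match inside an already-rewritten `torch.ao.nn.quantized`, because the literal `torch.nn.` prefix is gone after the rewrite.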

This is for https://github.com/pytorch/pytorch/issues/81667

Test plan: CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94170
Approved by: https://github.com/jerryzh168
2023-02-07 02:32:23 +00:00
__init__.py
_equalize.py
convert.py
fuse.py
fusion_patterns.py
graph_module.py [ao][fx] fixing public v private graph_module.py (#88395) 2022-12-15 02:15:04 +00:00
match_utils.py
pattern_utils.py AO migration: replace torch internal callsites (#94170) 2023-02-07 02:32:23 +00:00
prepare.py
quantization_patterns.py AO migration: replace torch internal callsites (#94170) 2023-02-07 02:32:23 +00:00
quantization_types.py
utils.py