pytorch/test/quantization/fx
Angela Yi 9b94aa5356 [quant][fx][fix] Fused modules with object_type in qconfig (#60779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60779

When we do fusion, we replace certain module patterns (such as Linear + ReLU) with fused versions (such as LinearReLU) by calling `_fuse_fx` in prepare_fx. However, when we then look up the fused module type in qconfig_dict, we no longer find a match, since the qconfig_dict contains only the original module types. An example is here [N882873](https://fburl.com/anp/azenjx3v).

So we now update the qconfig_dict to map each fused module type to the qconfig used by the modules that make up the fused module. If those constituent modules are not all mapped to the same qconfig, we raise an error.
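The update described above can be sketched without torch as a small helper: given an `object_type` mapping from module types to qconfigs and a fusion mapping from fused types to their constituent types, propagate the shared qconfig to the fused type and error out on a mismatch. This is a minimal, torch-free sketch of the idea; the names (`update_qconfig_for_fusion`, the string type tags standing in for `nn.Linear`, `nn.ReLU`, `nni.LinearReLU`) are illustrative, not the actual prepare_fx internals.

```python
def update_qconfig_for_fusion(qconfig_dict, fuse_map):
    """Map each fused type to the qconfig shared by its constituent types.

    qconfig_dict: {"object_type": {module_type: qconfig}}
    fuse_map:     {fused_type: (constituent_type, ...)}
    """
    object_type = dict(qconfig_dict.get("object_type", {}))
    for fused, parts in fuse_map.items():
        qconfigs = [object_type.get(p) for p in parts]
        # The constituent modules must agree on a single qconfig.
        if len(set(qconfigs)) > 1:
            raise ValueError(
                f"Modules fused into {fused} map to different qconfigs: {qconfigs}"
            )
        if qconfigs and qconfigs[0] is not None:
            object_type[fused] = qconfigs[0]
    return {**qconfig_dict, "object_type": object_type}


# Hypothetical type tags standing in for nn.Linear, nn.ReLU, nni.LinearReLU.
qconfig_dict = {"object_type": {"Linear": "qconfig_a", "ReLU": "qconfig_a"}}
fuse_map = {"LinearReLU": ("Linear", "ReLU")}

updated = update_qconfig_for_fusion(qconfig_dict, fuse_map)
print(updated["object_type"]["LinearReLU"])  # qconfig_a
```

If `Linear` and `ReLU` had been given different qconfigs, the lookup for the fused `LinearReLU` would be ambiguous, which is why the real fix raises an error in that case rather than picking one arbitrarily.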

Test Plan:
`python test/test_quantization.py TestFuseFx.test_qconfig_fused_module`

Imported from OSS

Reviewed By: supriyar

Differential Revision: D29406941

fbshipit-source-id: 74b5db89f4998aeb02b2bf7c37bf97326580c654
2021-06-28 15:22:22 -07:00
..
__init__.py
test_equalize_fx.py [quant] Input-Weight Equalization - tests (#60378) 2021-06-28 10:44:29 -07:00
test_numeric_suite_fx.py ns for fx: fix shadow logger error for resnet18 (#60559) 2021-06-24 13:42:18 -07:00
test_quantize_fx.py [quant][fx][fix] Fused modules with object_type in qconfig (#60779) 2021-06-28 15:22:22 -07:00