pytorch/test/quantization
Supriya Rao 7cec4b3d4a [quant][fx] add _remove_qconfig flag to convert_fx (#53166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53166

Context: For FX modules that contain ScriptModules, calling
delattr(module, 'qconfig') throws an AttributeError. We will follow up
with a separate issue/repro to fix this problem.

This PR adds a temporary flag to the convert_fx API to preserve the qconfig attributes on the converted model.
We will remove this flag once we reach a conclusion on calling delattr on ScriptModules.
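The intended behavior can be sketched in plain Python. This is a hypothetical stand-in, not the actual FX convert code: `Module` and `convert` below are illustrative names, and the real flag lives on `torch.quantization.quantize_fx.convert_fx`.

```python
class Module:
    """Stand-in for a module carrying a qconfig attribute (hypothetical)."""
    def __init__(self, qconfig=None):
        self.qconfig = qconfig

def convert(modules, _remove_qconfig=True):
    # Mirrors the temporary flag from this PR: when True (the default),
    # strip qconfig from each submodule; when False, leave qconfig in
    # place so ScriptModule submodules never hit delattr.
    if _remove_qconfig:
        for m in modules:
            if hasattr(m, 'qconfig'):
                delattr(m, 'qconfig')
    return modules

mods = [Module(qconfig='per_tensor'), Module(qconfig='per_channel')]
converted = convert(mods, _remove_qconfig=False)
print(all(hasattr(m, 'qconfig') for m in converted))  # → True: qconfigs preserved
```

Passing `_remove_qconfig=False` is the workaround this PR enables; once delattr works on ScriptModules, the default stripping path can run unconditionally again.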

Test Plan:
python test/test_quantization.py test_preserve_qconfig

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26771518

fbshipit-source-id: 9fd72816576856ffb4aa11f8fde08303d1df10a2
2021-03-03 12:58:05 -08:00
serialized
__init__.py
test_backward_compatibility.py
test_bias_correction.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_equalize.py
test_fusion_passes.py
test_numeric_suite.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_numeric_suite_fx.py ns for fx: remove model_name from get_matching_activations API (#52926) 2021-03-01 08:56:18 -08:00
test_qat_module.py
test_quantize.py quant: fix conv transpose with qconfig == None (#52844) 2021-02-25 11:52:30 -08:00
test_quantize_fx.py [quant][fx] add _remove_qconfig flag to convert_fx (#53166) 2021-03-03 12:58:05 -08:00
test_quantize_jit.py split quantization jit op (#51329) 2021-01-29 07:49:53 -08:00
test_quantized_functional.py
test_quantized_module.py [quant] Reference option for conv module (#52316) 2021-02-24 14:54:02 -08:00
test_quantized_op.py [quant] Quantizable MultiheadAttention (#49866) 2021-02-17 12:36:30 -08:00
test_quantized_tensor.py
test_workflow_module.py Fake Quantization support for f16 and f32 (#52612) 2021-02-23 10:49:12 -08:00