pytorch/test/quantization
Supriya Rao 4c3f59b70e [quant][fx] Make scale, zero_point buffers in the model and use FQN (for quantized ops) (#51166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51166

Currently scale and zero_point values are stored as constant values in the graph.
This prevents these values from being updated in the graph and also prevents them
from being saved to the state_dict.

After this PR we store scale/zero_point values for quantized ops as buffers in the root module
and create get_attr nodes for them in the graph.

We also use the FQN of the module where the quantized ops are present to name these attributes so
that they can be uniquely identified and mapped to the quantized ops.
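The buffer-plus-get_attr pattern described above can be sketched with torch.fx. This is a minimal illustration, not the PR's actual implementation: a plain affine transform (x / scale + zero_point) stands in for a real quantized op, and buffer names like "sub_scale_0" are hypothetical examples of FQN-derived attribute names.

```python
import operator
import torch
import torch.fx as fx

root = torch.nn.Module()
# qparams registered as buffers on the root module: they now appear in
# state_dict and can be updated without rewriting the graph
root.register_buffer("sub_scale_0", torch.tensor(0.5))
root.register_buffer("sub_zero_point_0", torch.tensor(2.0))

graph = fx.Graph()
x = graph.placeholder("x")
# get_attr nodes fetch the buffers at run time, instead of baking the
# values into the graph as constants
scale = graph.get_attr("sub_scale_0")
zero_point = graph.get_attr("sub_zero_point_0")
# stand-in for a quantized op that consumes (input, scale, zero_point)
scaled = graph.call_function(operator.truediv, (x, scale))
out = graph.call_function(operator.add, (scaled, zero_point))
graph.output(out)

# GraphModule copies the attributes targeted by get_attr nodes from root
gm = fx.GraphModule(root, graph)
print(gm(torch.tensor([1.0])))           # tensor([4.])
print("sub_scale_0" in gm.state_dict())  # True
```

Because the qparams are buffers rather than graph constants, loading a state_dict (or mutating the buffer in place) changes what the graph computes with no graph rewrite.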

Test Plan:
python test/test_quantization.py TestQuantizeFx.test_qparams_buffers

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26092965

fbshipit-source-id: b549b2d3dccb45c5d38415ce95a09c26f5bd590b
2021-01-28 08:35:42 -08:00
..
serialized Adding a version serialization type to ConvPackedParam (#43086) 2020-08-28 15:41:30 -07:00
__init__.py
test_backward_compatibility.py Adding a version serialization type to ConvPackedParam (#43086) 2020-08-28 15:41:30 -07:00
test_bias_correction.py Bias Correction Implementation (#41845) 2020-08-20 21:40:33 -07:00
test_equalize.py
test_fusion_passes.py
test_numeric_suite.py [quantization] fix run_arg tiny bug (#48537) 2020-12-02 10:07:33 -08:00
test_numeric_suite_fx.py Compare Weights FX Implementation (#48056) 2020-11-20 17:17:19 -08:00
test_qat_module.py [reland][quant][fix] Add bias once in conv_fused (#48593) (#48661) 2020-12-02 10:17:43 -08:00
test_quantize.py quantization: Linear + BatchNorm1d fusion (#50748) 2021-01-20 12:59:02 -08:00
test_quantize_fx.py [quant][fx] Make scale, zero_point buffers in the model and use FQN (for quantized ops) (#51166) 2021-01-28 08:35:42 -08:00
test_quantize_jit.py Clean up some type annotations in caffe2/test (#49943) 2021-01-13 10:01:55 -08:00
test_quantized_functional.py [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038) 2020-11-17 09:52:21 -08:00
test_quantized_module.py [quant] Mapping for the _LinearWithBias (#49964) 2021-01-07 13:57:29 -08:00
test_quantized_op.py Back out "Revert D25903846: [pytorch][PR] Structured kernel definition for upsample_nearest2d" (#50794) 2021-01-25 10:43:53 -08:00
test_quantized_tensor.py [quant] PerChannelFloatQParams support for quint4x2 dtype (#45594) 2020-10-01 23:59:53 -07:00
test_workflow_module.py fake_quant: add a more memory efficient version (#50561) 2021-01-27 19:36:04 -08:00