pytorch/torch/quantization
Paul Shao 5a6d88d503 Updates to Scale and Zero Point Gradient Calculation (#42034)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42034

In this diff, the scale and zero point gradient calculations are updated to correctly reflect the actual backpropagation equation: by the chain rule, the local derivative must be multiplied by the upstream gradient `dY`, so the near-final output should be `dScale * dY` rather than `dScale * dX` (and likewise for zero point).
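The chain rule this diff fixes can be sketched in plain Python for a single element. This is a hedged illustration, not the actual CPU/CUDA kernel: the in-range local derivative follows the LSQ-style formulation commonly used for learnable fake quantization, and the `qmin`/`qmax` names are assumptions for the example.

```python
def fake_quant(x, scale, zero_point, qmin, qmax):
    """Quantize-dequantize a single value (per-tensor fake quantize)."""
    q = round(x / scale) + zero_point
    q = min(max(q, qmin), qmax)          # clamp to the quantized range
    return (q - zero_point) * scale

def grad_scale(x, scale, zero_point, qmin, qmax, dY):
    """dL/dscale for one element, given the upstream gradient dY = dL/dy."""
    q = round(x / scale) + zero_point
    if q < qmin:                          # clamped below the range
        local = qmin - zero_point
    elif q > qmax:                        # clamped above the range
        local = qmax - zero_point
    else:                                 # in range (straight-through estimate)
        local = round(x / scale) - x / scale
    return local * dY                     # the fix: chain with dY, not dX
```

For example, `grad_scale(2.3, 0.5, 0, -8, 7, 1.0)` yields roughly `0.4` (`round(4.6) - 4.6`), and doubling `dY` doubles the result. The zero-point gradient follows the same pattern: its local derivative is chained with `dY` as well.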

Test Plan:
To run the unit tests for all affected learnable fake quantize modules and kernels, on a devvm, execute the following command:

`buck test //caffe2/test:quantization -- learnable`

To also run the `cuda` tests, execute the following command:

`buck test mode/dev-nosan //caffe2/test:quantization -- learnable`

Reviewed By: jerryzh168

Differential Revision: D22735668

fbshipit-source-id: 45c1e0fd38cbb2d8d5e60be4711e1e989e9743b4
2020-07-27 11:18:49 -07:00
__init__.py [quant][graphmode] Rename graph mode quantization API to quantize_jit (#40212) 2020-06-19 18:13:37 -07:00
_equalize.py cross_layer_equalization (#41685) 2020-07-22 08:39:23 -07:00
_learnable_fake_quantize.py Updates to Scale and Zero Point Gradient Calculation (#42034) 2020-07-27 11:18:49 -07:00
_numeric_suite.py Remove unused Logger in get_matching_activations (#41023) 2020-07-07 00:33:07 -07:00
default_mappings.py qat eager: remove unneeded modules (#40396) 2020-06-22 17:45:51 -07:00
fake_quantize.py
fuse_modules.py Quantization: preserving pre and post forward hooks (#37233) 2020-07-13 12:41:24 -07:00
observer.py [quant] Add Graph Mode Passes to quantize EmbeddingBag operators (#41612) 2020-07-23 18:54:59 -07:00
qconfig.py Fix several quantization documentation typos (#40567) 2020-07-07 09:45:23 -07:00
quantize.py Move qconfig removal into convert() (#41930) 2020-07-25 13:27:13 -07:00
quantize_jit.py [quant][graphmode] Enable inplace option for top level API (#40414) 2020-06-23 16:42:48 -07:00
stubs.py