Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-15 21:00:47 +00:00)
Summary:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42034

In this diff, the scale and zero point gradient calculations are updated to correctly reflect the actual backpropagation equation: the near-final output should be `dScale * dY` (the upstream gradient), not `dScale * dX`; the same applies to the zero point.

Test Plan:

To execute the unit tests for all affected learnable fake quantize modules and kernels, run the following on a devvm:

`buck test //caffe2/test:quantization -- learnable`

To enable the `cuda` tests, run:

`buck test mode/dev-nosan //caffe2/test:quantization -- learnable`

Reviewed By: jerryzh168

Differential Revision: D22735668

fbshipit-source-id: 45c1e0fd38cbb2d8d5e60be4711e1e989e9743b4
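To illustrate why the parameter gradients must be multiplied by the upstream gradient `dY`, here is a hedged NumPy sketch of a learnable fake-quantize backward pass (a straight-through-estimator formulation; this is an illustrative assumption, not the actual PyTorch kernel — the function name and signature are hypothetical):

```python
import numpy as np

def fake_quant_learnable_backward(x, scale, zero_point, qmin, qmax, dY):
    """Sketch of the backward pass for a learnable fake-quantize op:
        y = (clip(round(x / scale) + zero_point, qmin, qmax) - zero_point) * scale
    Every local parameter derivative is multiplied by the upstream gradient
    dY, which is the correction this diff describes (dScale * dY, not
    dScale * dX).
    """
    zp = np.round(zero_point)
    xq = np.round(x / scale) + zp
    in_range = (xq >= qmin) & (xq <= qmax)

    # dX: straight-through estimator -- pass dY through where the
    # quantized value lands inside the clipping range, zero elsewhere.
    dX = dY * in_range

    # dScale: inside the range the local derivative is round(x/s) - x/s
    # (treating round() as identity for its input); outside the range the
    # output saturates to (qmin|qmax - zp) * scale, so the derivative is
    # the saturated quantized value minus the zero point.
    local_in = np.round(x / scale) - x / scale
    local_out = np.clip(xq, qmin, qmax) - zp
    dScale = np.sum(dY * np.where(in_range, local_in, local_out))

    # dZeroPoint: zero inside the range (zp cancels out of y there);
    # -scale where the output saturates.
    dZeroPoint = np.sum(dY * np.where(in_range, 0.0, -scale))
    return dX, dScale, dZeroPoint
```

Note how each reduction sums `dY * local_derivative` over the tensor; using `dX` in place of `dY` there would propagate the wrong signal into the learned `scale` and `zero_point`.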
| File |
|---|
| `__init__.py` |
| `_equalize.py` |
| `_learnable_fake_quantize.py` |
| `_numeric_suite.py` |
| `default_mappings.py` |
| `fake_quantize.py` |
| `fuse_modules.py` |
| `observer.py` |
| `qconfig.py` |
| `quantize.py` |
| `quantize_jit.py` |
| `stubs.py` |