pytorch/test/quantization
Jerry Zhang eb0971cfe9 [quant][pt2e][be] Remove _input_output_share_observers and _reuse_input_obs_or_fq from QuantizationAnnotation (#102854)
Summary:
As titled: now that SharedQuantizationSpec is supported, these fields are no longer needed. This PR refactors
the uses of _input_output_share_observers to SharedQuantizationSpec.
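As a rough illustration of the idea (a simplified sketch, not the actual torch.ao.quantization API — the class names and `resolve_observers` helper here are hypothetical), an explicit shared spec lets an edge declare "reuse the observer of that other edge," which replaces a boolean flag like `_input_output_share_observers`:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a "shared quantization spec":
# an op's output reuses the observer already assigned to one of its
# inputs, instead of relying on a separate share-observers flag.

@dataclass(frozen=True)
class QuantizationSpec:
    dtype: str  # e.g. "int8"

@dataclass(frozen=True)
class SharedQuantizationSpec:
    shared_with: str  # name of the edge whose observer is reused

class Observer:
    def __init__(self, dtype: str):
        self.dtype = dtype

def resolve_observers(annotations):
    """Map each edge name to an Observer, aliasing shared specs."""
    observers = {}
    # First pass: create a fresh observer for each concrete spec.
    for edge, spec in annotations.items():
        if isinstance(spec, QuantizationSpec):
            observers[edge] = Observer(spec.dtype)
    # Second pass: shared specs point at an existing observer instance,
    # so input and output end up observed by the same object.
    for edge, spec in annotations.items():
        if isinstance(spec, SharedQuantizationSpec):
            observers[edge] = observers[spec.shared_with]
    return observers

annotations = {
    "maxpool_in": QuantizationSpec("int8"),
    "maxpool_out": SharedQuantizationSpec("maxpool_in"),
}
obs = resolve_observers(annotations)
assert obs["maxpool_in"] is obs["maxpool_out"]  # one shared observer
```

The design point is that sharing becomes a property of the annotation itself rather than a side-channel flag the prepare pass has to special-case.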

Test Plan:
```
buck2 test mode/opt caffe2/test:quantization_pt2e -- 'caffe2/test:quantization_pt2e'
```

Reviewed By: andrewor14

Differential Revision: D46301342

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102854
Approved by: https://github.com/andrewor14
2023-06-03 07:31:09 +00:00
ao_migration
bc
core Enable quantized_max_pool3d (#101654) 2023-05-23 00:45:38 +00:00
eager
fx Add quantization lowering for nn.PixelShuffle and nn.PixelUnshuffle (#101926) 2023-05-24 19:33:26 +00:00
jit [BE] Move flatbuffer related python C bindings to script_init (#97476) 2023-03-28 17:56:32 +00:00
pt2e [quant][pt2e][be] Remove _input_output_share_observers and _reuse_input_obs_or_fq from QuantizationAnnotation (#102854) 2023-06-03 07:31:09 +00:00
serialized
__init__.py