pytorch/torch/quantization
Vasiliy Kuznetsov f80aaadbae fx quantization: add option to leave graph inputs and/or outputs quantized (#48624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48624

Before this PR, FX graph mode quantization assumed that all graph inputs
and outputs were in floating point, with some exceptions for
`standalone_module`.

This PR adds an option to mark graph inputs and/or outputs
as quantized.

This is useful when incrementally migrating models quantized with Eager mode.
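A minimal sketch of how the option might be used, assuming the config keys `"input_quantized_idxs"` and `"output_quantized_idxs"` in `prepare_custom_config_dict` (key names taken from this PR's description of quantized inputs/outputs; not verified against the current API):

```python
# Sketch: a prepare_custom_config_dict telling FX graph mode quantization
# that the listed positional inputs arrive already quantized and the
# listed outputs should be returned still quantized, so no
# quantize/dequantize ops are inserted at those graph boundaries.
# The key names below are assumptions based on this PR.
prepare_custom_config_dict = {
    # input 0 arrives as a quantized tensor
    "input_quantized_idxs": [0],
    # output 0 is returned without dequantizing
    "output_quantized_idxs": [0],
}

# 2020-era usage sketch (API of that time, not the current one):
#   from torch.quantization import get_default_qconfig
#   from torch.quantization.quantize_fx import prepare_fx, convert_fx
#   qconfig_dict = {"": get_default_qconfig("fbgemm")}
#   prepared = prepare_fx(float_model, qconfig_dict,
#                         prepare_custom_config_dict=prepare_custom_config_dict)
#   quantized = convert_fx(prepared)
```

This lets an FX-quantized submodule sit between Eager-mode-quantized stages without redundant dequantize/quantize pairs at its boundary.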

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25231833

fbshipit-source-id: 9f9da17be72b614c4c334f5c588458b3e726ed17
2020-12-01 10:39:51 -08:00
fx fx quantization: add option to leave graph inputs and/or outputs quantized (#48624) 2020-12-01 10:39:51 -08:00
__init__.py [quant][fix] Fix quant type classification for float_qparam qconfig (#48069) 2020-11-18 18:22:08 -08:00
_correct_bias.py
_equalize.py
_learnable_fake_quantize.py
_numeric_suite.py T78750158 Support varying size input in numeric suite at 10/30/2020, 3:55:01 PM (#47391) 2020-11-18 23:57:41 -08:00
_numeric_suite_fx.py Compare Weights FX Implementation (#48056) 2020-11-20 17:17:19 -08:00
fake_quantize.py [quant] FakeQuantize inherit from FakeQuantizeBase (#48072) 2020-11-18 19:14:20 -08:00
fuse_modules.py
fuser_method_mappings.py [quant][refactor] factor out get_combined_dict function (#47781) 2020-11-11 21:01:31 -08:00
observer.py [quant][fix] Fix quant type classification for float_qparam qconfig (#48069) 2020-11-18 18:22:08 -08:00
qconfig.py [quant][fix] Fix quant type classification for float_qparam qconfig (#48069) 2020-11-18 18:22:08 -08:00
quant_type.py
quantization_mappings.py [reland][quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) (#48038) 2020-11-17 09:52:21 -08:00
quantize.py [quant] FakeQuantize inherit from FakeQuantizeBase (#48072) 2020-11-18 19:14:20 -08:00
quantize_fx.py fx quantization: add option to leave graph inputs and/or outputs quantized (#48624) 2020-12-01 10:39:51 -08:00
quantize_jit.py [TorchScript] Support user defined classes as constants (#5062) 2020-11-16 20:52:02 -08:00
stubs.py
utils.py [quant][refactor] Move some util functions from torch/quantization/fx/utils.py to torch/quantization/utils.py (#48107) 2020-11-18 22:32:19 -08:00