pytorch/torch/quantization/fx
Latest commit: cc03ea2c47 by Angela Yi, 2021-06-07 11:19:43 -07:00
[quant] Implemented InputWeightObserver for Linear inputs

Summary: Implemented two observers (InputEqualObserver and WeightEqualObserver) which will be inserted into the graph during prepare_fx().

Test Plan: python test/test_quantization.py TestEqualizeFx

Reviewed By: supriyar

Differential Revision: D28836954

fbshipit-source-id: 25517dc82ae67698ed8b2dc334e3323286976104
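The commit summary describes observers that are inserted into the FX graph during prepare_fx() to record statistics on a Linear layer's inputs and weights, so that their per-channel ranges can later be equalized. As a rough, self-contained sketch of that idea in plain Python (these are illustrative stand-ins, not the actual PyTorch observer classes or their APIs):

```python
import math

class RangeObserver:
    """Tracks per-channel min/max over observed batches: a simplified
    stand-in for the InputEqualObserver / WeightEqualObserver pair
    described in the commit."""
    def __init__(self, num_channels):
        self.mins = [math.inf] * num_channels
        self.maxs = [-math.inf] * num_channels

    def observe(self, batch):
        # batch: list of rows, one value per channel in each row
        for row in batch:
            for c, v in enumerate(row):
                self.mins[c] = min(self.mins[c], v)
                self.maxs[c] = max(self.maxs[c], v)

    def ranges(self):
        return [mx - mn for mn, mx in zip(self.mins, self.maxs)]

def equalization_scales(input_obs, weight_obs):
    """Per-channel scale s_c = sqrt(w_range_c / x_range_c): scaling the
    inputs by s_c and the weight columns by 1/s_c balances the two
    ranges, which reduces per-channel quantization error."""
    return [math.sqrt(w / x)
            for x, w in zip(input_obs.ranges(), weight_obs.ranges())]

# Observe some toy data: input channel ranges are (1.0, 4.0),
# weight channel ranges are (4.0, 1.0).
input_obs = RangeObserver(2)
weight_obs = RangeObserver(2)
input_obs.observe([[0.0, 0.0], [1.0, 4.0]])
weight_obs.observe([[0.0, 0.0], [4.0, 1.0]])
print(equalization_scales(input_obs, weight_obs))  # [2.0, 0.5]
```

The square-root split is the usual cross-layer-equalization choice: it spreads the correction evenly, so neither the scaled inputs nor the scaled weights end up with a disproportionately large range.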
__init__.py
_equalize.py [quant] Implemented InputWeightObserver for Linear inputs 2021-06-07 11:19:43 -07:00
convert.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
fuse.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
fusion_patterns.py
graph_module.py [quant][graphmode][fx] Support preserving attributes in deepcopy of observed/quantized graphmodule (#56550) 2021-04-22 15:02:44 -07:00
match_utils.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
pattern_utils.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
prepare.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
qconfig_utils.py [reland][quant][fx][graphmode][refactor] Remove qconfig_map from Quantizer (#58455) (#58756) 2021-05-24 14:57:45 -07:00
quantization_patterns.py [quant][graphmode][fx][fix] Fix support for custom module (#59041) 2021-06-01 22:31:15 -07:00
quantization_types.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
quantize.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00
utils.py [quant][graphmode][fx][refactor] Split quantize.py to prepare.py and convert.py (#59353) 2021-06-02 23:52:39 -07:00