pytorch/test/quantization/eager
Kwanghoon An c404b2968c Support min/max carry over for eager mode from_float method (#127309)
Summary:
After QAT completes, or when a pre-tuned weight observer is provided via a tunable PTQ algorithm, the observer's statistics should not be overwritten by re-observing the given weights; for static QAT this must never happen.

By design, dynamic QAT also does not need to re-run the weight observer.

This change fixes that by carrying over the observer's min/max values in the eager-mode `from_float` method.
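The rule the fix enforces can be sketched as follows. This is a minimal illustration with hypothetical classes (`MinMaxObserver`, `from_float` here are simplified stand-ins, not the actual PyTorch API): when a weight observer already holds calibrated min/max, conversion reuses those statistics instead of re-observing the float weights.

```python
# Hypothetical sketch of the min/max carry-over rule, not the PyTorch API.

class MinMaxObserver:
    """Tracks the running min/max of observed values."""

    def __init__(self):
        self.min_val = None  # None means "never calibrated"
        self.max_val = None

    def observe(self, values):
        lo, hi = min(values), max(values)
        self.min_val = lo if self.min_val is None else min(self.min_val, lo)
        self.max_val = hi if self.max_val is None else max(self.max_val, hi)

    @property
    def calibrated(self):
        return self.min_val is not None


def from_float(weights, observer):
    """Derive a quantization range, carrying over pre-tuned min/max."""
    if not observer.calibrated:
        # Plain PTQ path: no prior statistics, so observe the weights now.
        observer.observe(weights)
    # QAT / pre-tuned path: statistics already exist -- do NOT overwrite them.
    return observer.min_val, observer.max_val


# Observer pre-tuned as after QAT: from_float keeps its range intact.
obs = MinMaxObserver()
obs.observe([-2.0, 3.0])
print(from_float([0.1, 0.2], obs))  # (-2.0, 3.0), not (0.1, 0.2)
```

A freshly constructed observer, by contrast, falls through to the observe step, matching the plain PTQ behavior the patch leaves unchanged.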

Test Plan: Signals

Differential Revision: D57747749

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127309
Approved by: https://github.com/jerryzh168
2024-05-29 19:33:26 +00:00
__init__.py
test_bias_correction_eager.py
test_equalize_eager.py
test_fuse_eager.py
test_model_numerics.py
test_numeric_suite_eager.py
test_quantize_eager_ptq.py
test_quantize_eager_qat.py