pytorch/test/quantization
andrewor14 8242fb62a7 [quant][pt2e] Fix conv-bn weight + bias per channel QAT (#125208)
Summary: This commit fixes the pattern matching for conv-bn
during QAT fusion in the case where both the weight and the bias
are quantized per channel. Previously this failed because the
weight and the bias shared the same example kwargs for their
scales and zero points, which caused these qparams to be tied
together during pattern matching.
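
For context, a minimal sketch of the PT2E QAT flow that exercises this
pattern matching, assuming the APIs available around this change
(capture_pre_autograd_graph, prepare_qat_pt2e). The model, shapes, and
quantizer config below are illustrative only; per-channel bias qparams
come from a quantizer that explicitly derives them (as the new test does),
not from the stock XNNPACKQuantizer config shown here:

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

class ConvBn(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))

example_inputs = (torch.randn(1, 3, 16, 16),)
model = capture_pre_autograd_graph(ConvBn(), example_inputs)

# Per-channel weight quantization; the conv-bn QAT pattern is matched and
# replaced inside prepare_qat_pt2e.
quantizer = XNNPACKQuantizer().set_global(
    get_symmetric_quantization_config(is_per_channel=True, is_qat=True)
)
model = prepare_qat_pt2e(model, quantizer)
model(*example_inputs)  # calibrate / train as usual
model = convert_pt2e(model)
```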

Test Plan:
python test/test_quantization.py TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_bn_per_channel_weight_bias
python test/test_quantization.py TestQuantizePT2EQAT_ConvBn2d.test_qat_conv_bn_per_channel_weight_bias

Reviewers: jerryzh168, angelayi

Subscribers: jerryzh168, angelayi, supriyar

Differential Revision: [D56740694](https://our.internmc.facebook.com/intern/diff/D56740694)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125208
Approved by: https://github.com/angelayi
2024-04-30 18:12:25 +00:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ao_migration | Enable UFMT on all of test/quantization/ao_migration &bc (#123994) | 2024-04-13 06:36:10 +00:00 |
| bc | Enable UFMT on all of test/quantization/ao_migration &bc (#123994) | 2024-04-13 06:36:10 +00:00 |
| core | Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330) | 2024-04-23 04:13:26 +00:00 |
| eager | | |
| fx | Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330) | 2024-04-23 04:13:26 +00:00 |
| jit | Enable UFMT on all of test/quantization/jit &pt2e (#124010) | 2024-04-14 06:07:23 +00:00 |
| pt2e | [quant][pt2e] Fix conv-bn weight + bias per channel QAT (#125208) | 2024-04-30 18:12:25 +00:00 |
| serialized | | |
| __init__.py | | |