pytorch/test/quantization
Max Ren d2033a0639 [quant][pt2e][xnnpack_quantizer] add support for linear_relu (#117052)
Add support for linear_relu annotation for XNNPACKQuantizer; this allows the input to linear and the output of relu to share the same quantization parameters.

Differential Revision: [D52574086](https://our.internmc.facebook.com/intern/diff/D52574086/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117052
Approved by: https://github.com/jerryzh168, https://github.com/digantdesai
2024-01-09 23:19:52 +00:00
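The commit above describes fusing a linear → relu pattern so both ends share one set of quantization parameters. A minimal pure-Python sketch of why that sharing is sound, assuming symmetric int8 affine quantization; the helper names and calibration values here are illustrative, not the XNNPACKQuantizer API:

```python
# Illustrative sketch (NOT the XNNPACKQuantizer API): when the shared
# (scale, zero_point) is calibrated on the relu output, the zero point
# sits at the representable minimum, so negative linear outputs clamp
# to the same int8 value that zero maps to. The relu then becomes a
# no-op in the quantized domain.

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Affine-quantize a float value to int8."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Map an int8 value back to float."""
    return (q - zero_point) * scale

# Hypothetical calibration result for a non-negative relu output range:
scale, zero_point = 0.05, -128

linear_out = [-1.2, 0.0, 3.7]               # float outputs of linear
relu_out = [max(0.0, v) for v in linear_out]  # after relu

# Quantizing both tensors with the SAME parameters yields identical
# int8 values: negatives clamp to qmin, which dequantizes to 0.0.
q_linear = [quantize(v, scale, zero_point) for v in linear_out]
q_relu = [quantize(v, scale, zero_point) for v in relu_out]
assert q_linear == q_relu
```

Under these assumptions the annotation only needs one observer at the relu output; a separate observer on the linear output would record the same statistics once clamped.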
ao_migration
bc
core [BE]: Update flake8 to v6.1.0 and fix lints (#116591) 2024-01-03 06:04:44 +00:00
eager [BE]: Update flake8 to v6.1.0 and fix lints (#116591) 2024-01-03 06:04:44 +00:00
fx [BE]: Enable F821 and fix bugs (#116579) 2024-01-01 08:40:46 +00:00
jit [BE]: Enable RUF015 codebase wide (#115507) 2023-12-11 15:51:01 +00:00
pt2e [quant][pt2e][xnnpack_quantizer] add support for linear_relu (#117052) 2024-01-09 23:19:52 +00:00
serialized
__init__.py