pytorch/test/quantization
Riley Dulin d61815cb7d [torch][ao] Use returned model from Quantizer.transform_for_annotation in prepare_pt2e (#132893)
Summary:
A Quantizer subclass can return a new model from `transform_for_annotation`,
which is common when it uses an ExportPass subclass that does not mutate the model in place.

Use the returned model instead of assuming it's the same object.

Differential Revision: D60869676

Pull Request resolved: https://github.com/pytorch/pytorch/pull/132893
Approved by: https://github.com/jerryzh168
2024-08-12 17:23:19 +00:00
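The fix described above can be sketched in plain Python. This is a hypothetical stand-in, not the real `torch.ao.quantization` code: the `Quantizer` class, its dict-based "model", and `prepare_pt2e_sketch` are all illustrative names invented here to show why the return value of `transform_for_annotation` must be rebound rather than dropped.

```python
# Hypothetical sketch of the pattern fixed by #132893: prepare_pt2e must use
# the model returned by Quantizer.transform_for_annotation, because an
# ExportPass-style transform may build and return a NEW model object
# instead of mutating the one passed in.

class Quantizer:
    """Minimal stand-in for a torch.ao Quantizer subclass (illustrative only)."""

    def transform_for_annotation(self, model):
        # A non-mutating pass returns a fresh model; the input is untouched.
        return {"annotated": True, "source": model}

    def annotate(self, model):
        # Annotation then operates on whatever model it receives.
        model["quant_annotations"] = ["linear", "conv"]
        return model


def prepare_pt2e_sketch(model, quantizer):
    # Before the fix, the return value of transform_for_annotation was
    # discarded, so a non-mutating transform was silently lost.
    # After the fix, `model` is rebound to the returned object.
    model = quantizer.transform_for_annotation(model)
    model = quantizer.annotate(model)
    return model


prepared = prepare_pt2e_sketch({"name": "toy"}, Quantizer())
```

With the rebinding in place, `prepared` carries both the transform's result and the annotations; dropping the return value would have left the original dict unannotated by the transform.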
ao_migration Enable UFMT on all of test/quantization/ao_migration &bc (#123994) 2024-04-13 06:36:10 +00:00
bc Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
core Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
eager Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
fx Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
jit Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
pt2e [torch][ao] Use returned model from Quantizer.transform_for_annotation in prepare_pt2e (#132893) 2024-08-12 17:23:19 +00:00
serialized
__init__.py