Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-15 21:00:47 +00:00
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48624

Before this PR, there was an assumption that all graph inputs and outputs are in floating point, with some exceptions for `standalone_module`. This PR adds an option to specify either inputs or outputs as being quantized. This is useful for incremental migrations of models using Eager mode.

Test Plan: Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D25231833

fbshipit-source-id: 9f9da17be72b614c4c334f5c588458b3e726ed17
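Conceptually, the option described above lets the caller declare which graph inputs (or outputs) already carry quantized tensors, so the quantization pass skips inserting conversion ops for them instead of assuming everything is floating point. The toy sketch below illustrates that decision logic only; the function and names (`plan_input_conversions`, `input_quantized_idxs`) are hypothetical stand-ins, not the actual PyTorch FX quantization API.

```python
def plan_input_conversions(input_names, input_quantized_idxs):
    """Decide, per graph input, whether a quantize op must be inserted.

    Under the old assumption, every input is floating point and always
    receives a quantize op. Declaring an input's index as already
    quantized lets it pass through unconverted, which is what enables
    incremental migration of partially-quantized Eager mode models.
    """
    plan = {}
    for idx, name in enumerate(input_names):
        if idx in input_quantized_idxs:
            plan[name] = "pass-through"       # already quantized upstream
        else:
            plan[name] = "insert-quantize"    # old default: assume float
    return plan

# Input 1 is declared quantized, so only input 0 gets a quantize op.
print(plan_input_conversions(["x", "y"], {1}))
```

In the real API the equivalent knob would be passed through the prepare-time configuration; the sketch only shows why an index set is sufficient to drive the per-input decision.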
| Name |
|---|
| fx/ |
| __init__.py |
| _correct_bias.py |
| _equalize.py |
| _learnable_fake_quantize.py |
| _numeric_suite.py |
| _numeric_suite_fx.py |
| fake_quantize.py |
| fuse_modules.py |
| fuser_method_mappings.py |
| observer.py |
| qconfig.py |
| quant_type.py |
| quantization_mappings.py |
| quantize.py |
| quantize_fx.py |
| quantize_jit.py |
| stubs.py |
| utils.py |