Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-14 20:57:59 +00:00
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26624

For QAT we need to be able to control batch norm for all modules from the top. This adds helper functions to enable/disable batch norm freezing during training.

ghstack-source-id: 91008297

Test Plan: buck test caffe2/test:quantization -- --print-passing-details

Differential Revision: D17512199

fbshipit-source-id: f7b981e2b1966ab01c4dbb161030177274a998b6
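A minimal sketch of the idea behind this change. The helper names `freeze_bn_stats` and `update_bn_stats` here are illustrative (the actual helpers live in PyTorch's QAT modules); the point is that `Module.apply` lets you toggle batch-norm statistic tracking for every submodule from the top of the model:

```python
import torch
import torch.nn as nn

def freeze_bn_stats(module):
    # Hypothetical helper in the spirit of this PR: when passed to
    # Module.apply, it puts every batch-norm layer in eval mode so the
    # running statistics stop updating during QAT fine-tuning.
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        module.eval()

def update_bn_stats(module):
    # Counterpart: re-enables running-stat updates.
    if isinstance(module, nn.modules.batchnorm._BatchNorm):
        module.train()

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.train()
x = torch.randn(4, 3, 16, 16)

# Freeze batch norm for the whole model from the top:
model.apply(freeze_bn_stats)
before = model[1].running_mean.clone()
model(x)
assert torch.equal(model[1].running_mean, before)  # stats untouched

# Unfreeze and the stats move again on the next forward pass:
model.apply(update_bn_stats)
model(x)
assert not torch.equal(model[1].running_mean, before)
```

Because `Module.apply` recurses over all submodules, the same call works unchanged for arbitrarily nested models, which is what "control batch norm for all modules from the top" refers to.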
Directory listing at this commit:

| Name |
|---|
| _intrinsic/ |
| backends/ |
| modules/ |
| parallel/ |
| qat/ |
| quantized/ |
| utils/ |
| __init__.py |
| __init__.pyi |
| _reduction.py |
| _VF.py |
| common_types.pyi |
| cpp.py |
| functional.py |
| functional.pyi.in |
| grad.py |
| init.py |
| parameter.py |
| parameter.pyi |