pytorch/torch/nn
Raghuraman Krishnamoorthi 84ee8ace12 Quantization aware training: Freeze batch norm support (#26624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26624

For QAT, we need to be able to control batch norm behavior for all modules from the top level. This adds helper functions to enable/disable freezing of batch norm statistics during training.
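
A minimal sketch of how such top-level helpers are typically used in the eager-mode QAT flow. The helper names (freeze_bn_stats / update_bn_stats under torch.nn._intrinsic.qat at this snapshot; torch.nn.intrinsic.qat in later releases) and the fuse_modules / prepare_qat steps are assumptions based on the surrounding tree, not spelled out in this commit message:

    import torch
    import torch.nn as nn
    import torch.quantization
    import torch.nn._intrinsic.qat as nniqat  # assumed location of the helpers at this snapshot

    # Toy model with a Conv + BN + ReLU group that QAT fuses into a single module.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
    model.train()
    model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
    torch.quantization.fuse_modules(model, [['0', '1', '2']], inplace=True)
    torch.quantization.prepare_qat(model, inplace=True)

    # ... train for a few epochs with batch norm statistics updating as usual ...

    # Freeze batch norm statistics for the remaining QAT epochs; applying from the
    # top reaches every fused ConvBn module in the model.
    # (Helper name assumed; the commit only states that enable/disable helpers were added.)
    model.apply(nniqat.freeze_bn_stats)

    # Statistics collection can be re-enabled the same way if needed.
    model.apply(nniqat.update_bn_stats)
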
ghstack-source-id: 91008297

Test Plan: buck test caffe2/test:quantization -- --print-passing-details

Differential Revision: D17512199

fbshipit-source-id: f7b981e2b1966ab01c4dbb161030177274a998b6
2019-09-30 00:37:03 -07:00
_intrinsic Quantization aware training: Freeze batch norm support (#26624) 2019-09-30 00:37:03 -07:00
backends Remove Module._backend as it's not used anymore. 2019-08-29 15:43:49 -07:00
modules Renames tensor.renamed -> rename, tensor.names_ -> rename_ (#26548) 2019-09-22 15:38:26 -07:00
parallel Revert D16428208: [pytorch][PR] only scatter in forward if multi-device per process 2019-07-27 22:41:20 -07:00
qat Add intrinsic module mappings (#23753) 2019-08-15 09:37:24 -07:00
quantized Improve repr for quantized modules 2019-09-28 15:15:14 -07:00
utils Add device check before accessing data_ptr in PackLayer (#26056) 2019-09-12 19:25:42 -07:00
__init__.py
__init__.pyi Fixes #25454 2019-08-30 07:59:26 -07:00
_reduction.py
_VF.py
common_types.pyi Fix Typing Error for Padding with asymmetric signatures (#24895) 2019-08-20 14:14:12 -07:00
cpp.py
functional.py Update ONNX Export for Interpolate in Opset 11 (#26778) 2019-09-25 05:43:20 -07:00
functional.pyi.in
grad.py
init.py
parameter.py
parameter.pyi Fix typing on nn.Parameter (#25586) 2019-09-09 07:54:27 -07:00