pytorch/torch
Supriya Rao 7cec4b3d4a [quant][fx] add _remove_qconfig flag to convert_fx (#53166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53166

Context: For FX modules that contain ScriptModules, calling
delattr(module, 'qconfig') throws an AttributeError. We will follow up
with a separate issue/repro to fix this problem.

This PR adds a temporary flag to the convert_fx API to preserve the qconfig attributes on the converted model.
We will remove this flag once we reach a conclusion on calling delattr on ScriptModules.
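
The failure mode and the workaround can be illustrated with a plain-Python sketch. `FrozenModule` and `convert_sketch` below are hypothetical stand-ins, not the real torch.fx or ScriptModule objects:

```python
# Plain-Python illustration of the problem and the workaround flag.
# `FrozenModule` and `convert_sketch` are hypothetical stand-ins, not the
# real torch.fx / ScriptModule objects.

class FrozenModule:
    """Mimics a module whose attributes cannot be deleted."""
    def __init__(self):
        self.qconfig = "some_qconfig"

    def __delattr__(self, name):
        # ScriptModule-like behavior: attribute deletion is rejected.
        raise AttributeError("cannot delete attribute %r" % name)


def convert_sketch(module, _remove_qconfig=True):
    """Sketch of the flag: skip delattr when asked to preserve qconfig."""
    if _remove_qconfig and hasattr(module, "qconfig"):
        delattr(module, "qconfig")  # raises AttributeError for FrozenModule
    return module


# Default path raises, which is the problem this flag works around:
try:
    convert_sketch(FrozenModule())
except AttributeError as exc:
    print("delattr failed:", exc)

# With the flag off, conversion succeeds and qconfig is preserved:
converted = convert_sketch(FrozenModule(), _remove_qconfig=False)
print(hasattr(converted, "qconfig"))
```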

Test Plan:
python test/test_quantization.py test_preserve_qconfig

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D26771518

fbshipit-source-id: 9fd72816576856ffb4aa11f8fde08303d1df10a2
2021-03-03 12:58:05 -08:00
_C [Gradient Compression] Remove some low-level methods of GradBucket class (#53098) 2021-03-03 12:06:14 -08:00
autograd fix(docs): indent in docstring of key_averages (#53006) 2021-03-01 15:18:20 -08:00
backends
contrib
csrc [Gradient Compression] Remove some low-level methods of GradBucket class (#53098) 2021-03-03 12:06:14 -08:00
cuda
distributed [ZeroRedundancyOptimizer] Minor stub fix (#53165) 2021-03-03 10:15:10 -08:00
distributions Add sample validation for LKJCholesky.log_prob (#52763) 2021-02-25 16:12:29 -08:00
fft [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
for_onnx
futures
fx [WIP][FX] Optionally record stack traces when symtracing (#53081) 2021-03-03 12:30:43 -08:00
jit Add default arguments to cuda stream and events (#53025) 2021-03-02 14:37:24 -08:00
legacy
lib [Gradient Compression] Remove some low-level methods of GradBucket class (#53098) 2021-03-03 12:06:14 -08:00
linalg Implements torch.linalg.lstsq (#49093) 2021-03-02 19:00:07 -08:00
multiprocessing
nn Deduplicate shared params before constructing Reducer in DDP (#51929) 2021-03-03 10:13:24 -08:00
onnx
optim [optim] bugfix when all parameters have no grad (#52944) 2021-03-03 11:56:09 -08:00
package [package] catch exceptions from calling reduce function. (#53061) 2021-03-01 21:27:08 -08:00
profiler
quantization [quant][fx] add _remove_qconfig flag to convert_fx (#53166) 2021-03-03 12:58:05 -08:00
sparse
testing Make meta a device (getting rid of empty_meta) (#53143) 2021-03-03 11:24:13 -08:00
utils Add more datapipe to functional API (#53123) 2021-03-03 07:01:00 -08:00
__config__.py
__future__.py
__init__.py Back out "Revert D26753571: [pytorch][PR] add submodules to sys.modules so their attributes can be pickled" (#53127) 2021-03-02 14:46:56 -08:00
_appdirs.py
_autograd_functions.py
_classes.py
_deploy.py [package] Pull out _UnpicklerWrapper into PackageUnpickler (#53049) 2021-03-01 18:40:52 -08:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_storage_docs.py
_tensor_docs.py
_tensor_str.py
_torch_docs.py [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
_utils.py Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Use touch() in pathlib for better compatibility on Windows (#52729) 2021-02-25 13:46:21 -08:00
custom_class.h Add a demo backend with compiler (#52603) 2021-02-26 11:53:34 -08:00
custom_class_detail.h
deploy.h
extension.h
functional.py [doc] Fix documentations of torch functions (#52982) 2021-03-01 09:59:57 -08:00
hub.py
library.h Make meta a device (getting rid of empty_meta) (#53143) 2021-03-03 11:24:13 -08:00
overrides.py Make meta a device (getting rid of empty_meta) (#53143) 2021-03-03 11:24:13 -08:00
py.typed
quasirandom.py
random.py
README.txt
script.h
serialization.py
storage.py
tensor.py Introduce mlc device (ML Compute device) to PyTorch's device list (#50634) 2021-02-24 22:39:11 -08:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.