pytorch/torch
James Reed 173dc5d16f __reduce__ for QScheme (#24969)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24969

This allows pickling QScheme objects.

Test Plan: Imported from OSS

Differential Revision: D16946567

Pulled By: jamesr66a

fbshipit-source-id: 57dbedb1e1aca2a2e17916eed662f727053ea926
2019-08-21 19:08:54 -07:00
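The commit above adds a `__reduce__` implementation so QScheme objects survive pickling. A minimal Python sketch of that pattern follows; `QSchemeLike` and `_qscheme_lookup` are hypothetical names for illustration, not PyTorch's actual (C++) implementation:

```python
import pickle

# Sketch of the pattern the commit describes: implement __reduce__ so
# that singleton-style objects (like torch.qscheme values) pickle by
# name and unpickle back to the very same registered instance.

_REGISTRY = {}

def _qscheme_lookup(name):
    # Called at unpickling time; returns the registered singleton.
    return _REGISTRY[name]

class QSchemeLike:
    def __init__(self, name):
        self.name = name
        _REGISTRY[name] = self

    def __reduce__(self):
        # Pickle as (callable, args): unpickling evaluates
        # _qscheme_lookup(self.name) instead of rebuilding the object,
        # so identity is preserved across the round trip.
        return (_qscheme_lookup, (self.name,))

per_tensor_affine = QSchemeLike("per_tensor_affine")
restored = pickle.loads(pickle.dumps(per_tensor_affine))
assert restored is per_tensor_affine
```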
_thnn
autograd Added torch.autograd.profiler.record_function() as context manager. (#23428) 2019-07-30 11:10:01 -07:00
backends
contrib Remove torch.contrib._graph_vis (#24874) 2019-08-21 10:48:07 -07:00
csrc __reduce__ for QScheme (#24969) 2019-08-21 19:08:54 -07:00
cuda Let set_rng_state and get_rng_state accept string parameter (#23448) 2019-07-29 08:08:39 -07:00
distributed throw remote exception on client side (#24138) 2019-08-20 09:40:35 -07:00
distributions Vectorize LowerCholeskyTransform (#24131) 2019-08-15 06:46:19 -07:00
for_onnx
jit Misc doc updates #2 (#24445) 2019-08-21 16:45:19 -07:00
legacy
lib Revert D16220638: [pytorch][PR] Detect and handle NCCL errors appropriately in ProcessGroupNCCL. 2019-08-21 09:40:38 -07:00
multiprocessing Add multiprocessing_context= argument to DataLoader (#22990) 2019-07-29 12:58:40 -07:00
nn Added relu6 kernel (#24799) 2019-08-21 13:57:00 -07:00
onnx Merge ProfiledTensorType and TensorType (#24284) 2019-08-20 13:01:28 -07:00
optim Add epsilon argument to Adagrad optimizer (#24980) 2019-08-21 16:36:51 -07:00
quantization extra_repr for quantized modules (#24443) 2019-08-16 22:38:45 -07:00
sparse
testing Fix get_all_math_dtypes for device='cuda' returning None (#23028) 2019-07-19 09:29:16 -07:00
utils Remove support for old architectures in cpp_extension and CMake (#24442) 2019-08-19 06:23:33 -07:00
__config__.py
__future__.py Add torch.__future__._overwrite_module_params_on_conversion global flag, and check it in nn.Module._apply() (#21613) 2019-06-19 10:30:02 -07:00
__init__.py Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261) 2019-08-05 07:42:34 -07:00
__init__.pyi.in Updated docs and added deprecation warnings to acknowledge a bool tensor (#22261) 2019-08-05 07:42:34 -07:00
_classes.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
_jit_internal.py Misc doc updates #2 (#24445) 2019-08-21 16:45:19 -07:00
_ops.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
_six.py Finished the high-priority functions (#21127) 2019-06-04 17:59:05 -07:00
_storage_docs.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
_tensor_docs.py Documentation for Tensor.record_stream() (#24078) 2019-08-16 08:07:33 -07:00
_tensor_str.py Add names to repr for named tensors 2019-08-02 11:37:29 -07:00
_torch_docs.py Test if descriptions of args are in the template (#24161) 2019-08-20 16:34:50 -07:00
_utils.py Catch and print exception traceback in parallel_apply() workers (#18055) 2019-07-26 11:41:22 -07:00
_utils_internal.py
abi-check.cpp
CMakeLists.txt Revert D16914345: [pytorch][PR] Move the detection of cuDNN to FindCUDNN.cmake 2019-08-20 14:23:12 -07:00
custom_class.h search class type for methods (#23689) 2019-08-12 20:29:45 -07:00
extension.h
functional.py Implement tensor.align_to(names), torch.align_tensors(*tensors) (#23804) 2019-08-14 09:40:27 -07:00
hub.py Use dst dir for temp file (#23629) 2019-07-31 19:04:03 -07:00
namedtensor.py Update tensor.view_names / tensor.names_ API (#23973) 2019-08-14 09:40:35 -07:00
py.typed
quasirandom.py Make SobolEngine use random seed if not specified (#24884) 2019-08-20 09:22:41 -07:00
random.py Refactor Random Number Generators in ATen (#21555) 2019-06-19 13:54:09 -07:00
README.txt
script.h Add Pickler C++ API (#23241) 2019-08-12 14:43:31 -07:00
serialization.py fix error message 2019-07-18 23:38:55 -07:00
storage.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
tensor.py Update tensor.view_names / tensor.names_ API (#23973) 2019-08-14 09:40:35 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but their contents are an *internal implementation
detail* and should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
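The public-accessor discipline described above can be sketched in a few lines of Python; `tensor_new`, `tensor_n_dimension`, and `_THTensorImpl` are hypothetical stand-ins for the real TH API, used only to illustrate the split between public functions and internal struct layout:

```python
# Sketch of the pattern the note describes: clients manipulate a handle
# only through public functions, never by reaching into its fields.

class _THTensorImpl:
    """Internal representation (the ".hpp side"); external code should
    not touch these fields directly."""
    def __init__(self, sizes):
        self.sizes = list(sizes)

def tensor_new(sizes):
    """Public constructor (the "THTensor.h side")."""
    return _THTensorImpl(sizes)

def tensor_n_dimension(t):
    """Public accessor; the supported way to query the struct."""
    return len(t.sizes)

t = tensor_new((2, 3, 4))
assert tensor_n_dimension(t) == 3  # public API: fine
# Reading t.sizes directly here would be the "abstraction violation"
# that the note asks to be marked with a pointer back to it.
```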