pytorch/torch
Tongzhou Wang af638ad5d7 pin_memory should not copy on already pinned tensors (#23484)
Summary:
fixes https://github.com/pytorch/pytorch/issues/21076
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23484

Differential Revision: D16546264

Pulled By: ezyang

fbshipit-source-id: 8058e0bbc6336751f36b884d71234feef498a982
2019-07-30 21:16:23 -07:00
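The behavioral change this commit describes — `pin_memory()` returning the tensor unchanged when it is already pinned, instead of copying into a fresh pinned buffer — can be sketched with a toy stand-in class. `ToyTensor` and its methods below only mirror the shape of the tensor API for illustration; this is not PyTorch's actual implementation, which lives in C++ under torch/csrc:

```python
class ToyTensor:
    """Minimal stand-in for a tensor that tracks pinned (page-locked) status."""

    def __init__(self, data, pinned=False):
        self.data = data
        self._pinned = pinned

    def is_pinned(self):
        return self._pinned

    def pin_memory(self):
        # After the fix: an already-pinned tensor is returned as-is
        # rather than being copied into a new pinned buffer.
        if self.is_pinned():
            return self
        return ToyTensor(list(self.data), pinned=True)


t = ToyTensor([1.0, 2.0, 3.0])
p1 = t.pin_memory()   # copies into a pinned buffer
p2 = p1.pin_memory()  # no-op: the same object comes back
print(p1 is t, p2 is p1)  # False True
```

Before the fix, the second call would have allocated and copied again; the identity check `p2 is p1` is what #23484 makes hold.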
_thnn
autograd Added torch.autograd.profiler.record_function() as context manager. (#23428) 2019-07-30 11:10:01 -07:00
backends
contrib
csrc pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00
cuda Let set_rng_state and get_rng_state accept string parameter (#23448) 2019-07-29 08:08:39 -07:00
distributed make OMP_NUM_THREADS default in launch.py (#22501) 2019-07-23 16:14:24 -07:00
distributions Fix distributions.Categorical.sample bug from .view() (#23328) 2019-07-29 12:09:50 -07:00
for_onnx
jit Include recursive class compilations in error call stack (#23454) 2019-07-30 17:29:54 -07:00
legacy
lib Remove superfluous check (#23370) 2019-07-25 11:26:16 -07:00
multiprocessing Add multiprocessing_context= argument to DataLoader (#22990) 2019-07-29 12:58:40 -07:00
nn Quantized Average Pool kernel 2019-07-30 10:51:25 -07:00
onnx ONNX export for index_select (#21866) 2019-07-26 13:56:15 -07:00
optim Renamed CosineAnnealingLr to CosineAnnealingLR (#23242) 2019-07-23 14:54:15 -07:00
quantization Change condition in swap module 2019-07-30 17:25:02 -07:00
sparse
testing Fix get_all_math_dtypes for device='cuda' retuning None (#23028) 2019-07-19 09:29:16 -07:00
utils Add multiprocessing_context= argument to DataLoader (#22990) 2019-07-29 12:58:40 -07:00
__config__.py
__future__.py Add torch.__future__._overwrite_module_params_on_conversion global flag, and check it in nn.Module._apply() (#21613) 2019-06-19 10:30:02 -07:00
__init__.py Fusion and _intrinsic modules (#23003) 2019-07-23 14:54:19 -07:00
__init__.pyi.in pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00
_jit_internal.py Add initial support for serializing classes 2019-07-19 14:51:59 -07:00
_ops.py Make traced fns also go into the global python CU 2019-07-16 12:04:16 -07:00
_six.py
_storage_docs.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
_tensor_docs.py pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00
_tensor_str.py Added Bfloat16 tensor for cpu with very limited support (#21860) 2019-07-10 09:08:52 -07:00
_torch_docs.py Rename gels to lstsq (#23460) 2019-07-30 09:56:04 -07:00
_utils.py Catch and print exception traceback in parallel_apply() workers (#18055) 2019-07-26 11:41:22 -07:00
_utils_internal.py
abi-check.cpp
CMakeLists.txt PyTorch export to ONNX Opset 7 and 8 - Cont (#22421) 2019-07-12 14:52:48 -07:00
extension.h
functional.py Rename gels to lstsq (#23460) 2019-07-30 09:56:04 -07:00
hub.py
py.typed
quasirandom.py
random.py Refactor Random Number Generators in ATen (#21555) 2019-06-19 13:54:09 -07:00
README.txt
script.h
serialization.py fix error message 2019-07-18 23:38:55 -07:00
storage.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
tensor.py pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
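The discipline the note prescribes — manipulate an opaque struct only through its public header, never by poking at its internal layout — can be illustrated with a small Python analogy. All names below (`_THTensor`, `th_tensor_*`) are hypothetical stand-ins; the real boundary is between the functions declared in `THTensor.h` and the struct layout defined in `THTensor.hpp`:

```python
class _THTensor:
    """Stand-in for the struct in THTensor.hpp: internal representation,
    subject to change, not meant for direct external use."""

    def __init__(self):
        self._sizes = []
        self._storage = []


# "Public header" functions, analogous to those in THTensor.h.
# External clients should go through these and never touch
# _THTensor's fields directly.
def th_tensor_new():
    return _THTensor()


def th_tensor_resize1d(t, n):
    t._sizes = [n]
    t._storage = [0.0] * n


def th_tensor_n_dimension(t):
    return len(t._sizes)


t = th_tensor_new()
th_tensor_resize1d(t, 4)
print(th_tensor_n_dimension(t))  # 1
```

Code that reaches into `t._sizes` or `t._storage` directly is the analogue of the torch/csrc sites that violate the abstraction and carry a pointer to this note: it keeps working only until the internal layout changes.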