pytorch/torch
Vitaly Fedyunin 7b2e8c323c Add memory format argument to the clone operator (#27106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27106

Adds a `memory_format` argument to the `clone` operator.

Introduces new `clone` behavior when it is called as `input_t.clone(memory_format=torch.preserve_format)`:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If (1) does not hold and the tensor's values are stored in the channels-last format, the output tensor will also have the channels-last format.
3) In all other cases, the output tensor will be contiguous.

 ---
A dense tensor is a tensor that stores its values in a single contiguous block of memory, with no gaps.
A non-overlapping tensor is a tensor in which no two elements share the same memory location.
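
To make the three cases concrete, here is a minimal Python sketch (hypothetical example code written for this summary, not part of the PR; it assumes a build that includes this change):

```python
import torch

# Case (1): a transposed tensor is non-overlapping and dense, so the
# clone keeps the same (non-contiguous) strides as the input.
x = torch.randn(4, 5).t()
y = x.clone(memory_format=torch.preserve_format)
assert y.stride() == x.stride()

# Case (2): slicing the last dimension of a channels-last tensor leaves
# gaps (no longer dense), but the stride order is still channels-last,
# so the clone comes back contiguous in the channels-last format.
n = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)
m = n[:, :, :, :3].clone(memory_format=torch.preserve_format)
assert m.is_contiguous(memory_format=torch.channels_last)

# Case (3): an expanded tensor has overlapping elements (stride 0), so
# the clone falls back to a plain contiguous copy.
e = torch.randn(3, 1).expand(3, 4)
c = e.clone(memory_format=torch.preserve_format)
assert c.is_contiguous()
```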

Test Plan: Imported from OSS

Differential Revision: D17699357

Pulled By: VitalyFedyunin

fbshipit-source-id: 5ae1537c2aca1abf0bf1eec4416846129c156f66
2019-10-03 12:08:47 -07:00
autograd Add warning to anomaly_mode doc fix #26408 (#26615) 2019-09-24 07:27:39 -07:00
backends Remove fbgemm_is_cpu_supported in favor of torch.backends.quantized.supported_qengines (#26840) 2019-09-27 13:45:15 -07:00
contrib
csrc Add memory format argument to the clone operator (#27106) 2019-10-03 12:08:47 -07:00
cuda
distributed make python udf serialization format to be binary plus tensor tables (#27136) 2019-10-02 00:10:32 -07:00
distributions
for_onnx
jit Make cpp-backed jit classes appear as being in torch.jit 2019-10-03 08:28:36 -07:00
legacy
lib Add bitwise distributed reduction ops (#26824) 2019-09-26 08:09:49 -07:00
multiprocessing
nn Fix reprs for _intrinsic modules 2019-10-02 19:55:49 -07:00
onnx Add memory format argument to the clone operator (#27106) 2019-10-03 12:08:47 -07:00
optim fix type annotation 2019-09-27 13:39:36 -07:00
quantization Factored out the default mappings 2019-10-03 11:52:21 -07:00
sparse
testing
utils Make cpp-backed jit classes appear as being in torch.jit 2019-10-03 08:28:36 -07:00
__config__.py
__future__.py
__init__.py Rename _intrinsic to intrinsic 2019-10-02 18:53:06 -07:00
__init__.pyi.in
_classes.py
_jit_internal.py Make cpp-backed jit classes appear as being in torch.jit 2019-10-03 08:28:36 -07:00
_namedtensor_internals.py Better named tensor error messages. (#26974) 2019-09-27 14:12:36 -07:00
_ops.py
_six.py
_storage_docs.py
_tensor_docs.py Per-channel quantized tensor to have only a single axis (#26675) 2019-09-23 22:29:01 -07:00
_tensor_str.py Per-channel quantized tensor to have only a single axis (#26675) 2019-09-23 22:29:01 -07:00
_torch_docs.py Add torch.promote_types function 2019-09-27 16:48:38 -07:00
_utils.py Serialize XLA Tensor (#27041) 2019-10-01 15:05:30 -07:00
_utils_internal.py
abi-check.cpp
CMakeLists.txt Add send and recv backward functions for builtin operators RPC. (#25527) 2019-10-03 01:18:46 -07:00
custom_class.h
extension.h
functional.py Fixed Error message for tensor.align_to (#27221) 2019-10-02 14:19:40 -07:00
hub.py Automatically select proper tqdm submodule (#27108) 2019-10-01 05:34:08 -07:00
py.typed
quasirandom.py
random.py
README.txt
script.h
serialization.py torch.load default encoding change to 'utf-8' (#26421) 2019-09-25 14:59:02 -07:00
storage.py
tensor.py Serialize XLA Tensor (#27041) 2019-10-01 15:05:30 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed alongside the public ones,
they are *internal implementation detail* headers, whose contents should
largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
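
As a concrete illustration of the intended pattern, here is a hypothetical
sketch that sticks to the public C API (the `THFloatTensor_*` names follow
the classic TH naming convention; exact signatures vary between releases):

```cpp
// Manipulate a TH tensor through the public C API from THTensor.h-style
// headers, never through the struct internals declared in the .hpp files.
#include <TH/TH.h>

int main() {
  THFloatTensor* t = THFloatTensor_newWithSize2d(2, 3);

  // Public accessors keep callers independent of the struct layout.
  THFloatTensor_set2d(t, 0, 1, 42.0f);
  float v = THFloatTensor_get2d(t, 0, 1);

  // An abstraction violation would instead reach into fields such as
  // t->storage, which are implementation details of the .hpp headers.

  THFloatTensor_free(t);
  return v == 42.0f ? 0 : 1;
}
```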