pytorch/torch
Latest commit: 032d6b0643 by Alban Desmaison (2021-05-26 06:15:05 -07:00)

    Revert D28112689: CUDA support in the CSR layout: constructors

    Test Plan: revert-hammer
    Differential Revision: D28112689 (1416e57465)
    Original commit changeset: f825cd4bce40
    fbshipit-source-id: 421fc590797ac5fab6a55ac6f213361fbba7cd5b

_C Add parsing logic for Tuple[()] annotation (#58340) 2021-05-25 12:12:43 -07:00
ao
autograd Add no-grad inference mode note (#58513) 2021-05-25 13:06:54 -07:00
backends
contrib
cpu enable torch.cpu.amp.autocast (#57386) 2021-05-20 17:48:36 -07:00
csrc Revert D28112689: CUDA support in the CSR layout: constructors 2021-05-26 06:15:05 -07:00
cuda Expose cudaMemGetInfo (#58635) 2021-05-25 14:58:35 -07:00
distributed [c10d] Use pg wrapper in detailed debug mode (#58281) 2021-05-25 09:55:52 -07:00
distributions
fft
for_onnx
futures
fx Add parsing logic for Tuple[()] annotation (#58340) 2021-05-25 12:12:43 -07:00
jit Add parsing logic for Tuple[()] annotation (#58340) 2021-05-25 12:12:43 -07:00
legacy
lib [c10d] Introduce ProcessGroupWrapper (#58224) 2021-05-24 20:09:51 -07:00
linalg Clarifies cholesky_ex role and makes batched support a common string (#58217) 2021-05-17 05:23:06 -07:00
multiprocessing
nn [docs] Clarify batch_first behavior for nn.LSTM, nn.RNN, and nn.GRU (#58809) 2021-05-25 15:27:17 -07:00
onnx Add mish activation function (#58648) 2021-05-25 10:36:21 -07:00
optim refactor ASGD to use functional API (#58410) 2021-05-19 18:55:52 -07:00
package [torch][package] Fix importlib.resources.path for python <3.8.8 (#58718) 2021-05-21 19:16:54 -07:00
profiler
quantization Add mish activation function (#58648) 2021-05-25 10:36:21 -07:00
sparse
special
testing [numpy] torch.i0: promote integer inputs to float (#52735) 2021-05-25 22:02:00 -07:00
utils [resubmit] masked_scatter thrust->cub (#58865) 2021-05-25 11:00:50 -07:00
__config__.py
__future__.py
__init__.py add deterministic path for scatter_add_cuda for 1D tensors (#58761) 2021-05-23 21:36:02 -07:00
_appdirs.py
_autograd_functions.py
_classes.py
_deploy.py
_jit_internal.py Add parsing logic for Tuple[()] annotation (#58340) 2021-05-25 12:12:43 -07:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_storage_docs.py
_tensor.py Revert D28112689: CUDA support in the CSR layout: constructors 2021-05-26 06:15:05 -07:00
_tensor_docs.py [torch][repeat_interleave] Fix ambigious function call (#58881) 2021-05-25 00:31:32 -07:00
_tensor_str.py
_torch_docs.py [torch][repeat_interleave] Fix ambigious function call (#58881) 2021-05-25 00:31:32 -07:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt
custom_class.h [PyTorch] Extract non-template parts of torch::class_ (#54548) 2021-05-25 14:47:00 -07:00
custom_class_detail.h [PyTorch] Extract non-template parts of torch::class_ (#54548) 2021-05-25 14:47:00 -07:00
deploy.h
extension.h
functional.py Added sublist support for torch.einsum (#56625) 2021-05-21 08:36:45 -07:00
hub.py
library.h
overrides.py Add mish activation function (#58648) 2021-05-25 10:36:21 -07:00
py.typed
quasirandom.py
random.py
README.txt
script.h
serialization.py
storage.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather
than C headers.  These headers serve double duty: they ship alongside
the public C headers, but they are *internal implementation detail*
headers whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, external
code should use the public functions (declared in headers like
`THTensor.h`, NOT `THTensor.hpp`) to manipulate the underlying structs.
However, there are a few places in torch/csrc where we violate this
abstraction.  Each such site is marked with a pointer to this note, and
each will have to be refactored when we refactor the guts of THTensor
and related structures.
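
As a concrete sketch of the convention (illustrative only, not code from
the tree; `first_dim` is a made-up helper, but the accessor it calls is
part of the public TH C API):

    // Sketch: manipulate tensors through the public C API only.
    #include <TH/THTensor.h>       // public header: opaque struct + accessors
    // #include <TH/THTensor.hpp>  // internal C++ header; only the marked
    //                             // sites in torch/csrc may reach in here.

    int64_t first_dim(THFloatTensor *t) {
      // Good: query the size through the public accessor.
      return THFloatTensor_size(t, 0);
      // Bad (the abstraction violation): including THTensor.hpp and reading
      // the struct's internal fields directly; their layout is an
      // implementation detail that changes as TH is refactored.
    }

Going through the accessor keeps call sites stable while the guts of
THTensor and related structures are refactored underneath.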