pytorch/torch
Nitish Awasthi 64965c4572 Replaced blacklist with blocklist (#42097)
Summary:
Closes https://github.com/pytorch/pytorch/issues/41726

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42097

Reviewed By: ngimel

Differential Revision: D22779535

Pulled By: SplitInfinity

fbshipit-source-id: 1d414af22a1b3e856a11d64cff4b4d33160d957b
2020-07-28 12:08:54 -07:00
_C Add done() API to Future (#42013) 2020-07-24 14:13:41 -07:00
autograd typo fixes (#41632) 2020-07-20 07:23:00 -07:00
backends
contrib remediation of S205607 2020-07-17 17:19:47 -07:00
csrc Replaced blacklist with blocklist (#42097) 2020-07-28 12:08:54 -07:00
cuda typo fixes (#41632) 2020-07-20 07:23:00 -07:00
distributed typo fixes (#41632) 2020-07-20 07:23:00 -07:00
distributions
for_onnx
futures Add done() API to Future (#42013) 2020-07-24 14:13:41 -07:00
jit DOC: split quantization.rst into smaller pieces (#41321) 2020-07-25 23:59:40 -07:00
legacy
lib Back out "[NCCL] DDP communication hook: getFuture()" (#42152) 2020-07-28 10:05:35 -07:00
multiprocessing
nn Raise RuntimeError for zero stride pooling (#41819) 2020-07-28 11:07:12 -07:00
onnx [ONNX] Add pass that fuses Conv and BatchNormalization (#40547) 2020-07-22 14:59:27 -07:00
optim Avoid zero division in _cubic_interpolate (#42093) 2020-07-28 08:32:00 -07:00
quantization Updates to Scale and Zero Point Gradient Calculation (#42034) 2020-07-27 11:18:49 -07:00
sparse typo fixes (#41632) 2020-07-20 07:23:00 -07:00
testing Fix the issue GPU skip message(#41378) (#41973) 2020-07-28 08:28:31 -07:00
utils Allow drop_last option in DistributedSampler (#41171) 2020-07-28 11:33:08 -07:00
__config__.py
__future__.py
__init__.py Grammar Changes (#42076) 2020-07-26 13:53:41 -07:00
_appdirs.py
_classes.py
_jit_internal.py [1/N] Implement Enum JIT support (#41390) 2020-07-18 22:15:06 -07:00
_linalg_utils.py
_lobpcg.py typo fixes (#41632) 2020-07-20 07:23:00 -07:00
_lowrank.py
_namedtensor_internals.py
_ops.py
_overrides.py Add torch.movedim (#41480) 2020-07-23 09:41:01 -07:00
_six.py
_storage_docs.py
_tensor_docs.py Reland split (#41567) 2020-07-21 08:06:27 -07:00
_tensor_str.py
_torch_docs.py Add torch.movedim (#41480) 2020-07-23 09:41:01 -07:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Replace if(NOT ${var}) by if(NOT var) (#41924) 2020-07-23 15:49:20 -07:00
custom_class.h
custom_class_detail.h
extension.h
functional.py Add torch.atleast_{1d/2d/3d} (#41317) 2020-07-17 10:10:41 -07:00
hub.py typo fixes (#41632) 2020-07-20 07:23:00 -07:00
library.h
py.typed remediation of S205607 2020-07-17 17:19:47 -07:00
quasirandom.py
random.py
README.txt
script.h
serialization.py
storage.py
tensor.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers double as *internal implementation
detail* headers; their contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.