
Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but they are really *internal implementation detail*
headers, whose contents should not be relied upon by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.