pytorch/torch
Michael Dagitses b737629ff0 simplify op name determination into a single forward pass (#64261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64261

Note that this does not preserve byte-for-byte compatibility with
existing names.

Test Plan:
* Rely on CI to catch gross errors.
* Merge after release cut to catch subtle issues.

Reviewed By: albanD

Differential Revision: D30700647

Pulled By: dagitses

fbshipit-source-id: 7b02f34b8fae3041240cc78fbc6bcae498c3acd4
2021-09-02 07:32:11 -07:00
_C Revert D30543236: Add python mode 2021-08-31 15:28:33 -07:00
ao [quant] AO migration of the quantize.py (#64086) 2021-08-29 20:30:01 -07:00
autograd Add forward AD support for custom Functions (#64061) 2021-09-01 14:33:09 -07:00
backends
contrib
cpu add operation list for AutocastCPU (#63534) 2021-08-30 19:30:33 -07:00
csrc simplify op name determination into a single forward pass (#64261) 2021-09-02 07:32:11 -07:00
cuda fix syntax error in bfloat16 PR (#64122) 2021-08-31 14:33:12 -07:00
distributed [DDP Comm Hook] Create a noop hook for performance debugging (#64344) 2021-09-01 17:36:22 -07:00
distributions
fft
for_onnx
futures
fx Fix TRTModule not adding outputs in order (#64418) 2021-09-02 01:36:23 -07:00
jit Remove outdated warning about RecursiveScriptModule not being copiable (#64085) 2021-08-31 21:31:32 -07:00
legacy
lib
linalg
multiprocessing
nn fix copy.deepcopy on LinearPackedParams (#64367) 2021-09-02 06:30:42 -07:00
onnx ENH Adds label_smoothing to cross entropy loss (#63122) 2021-08-29 23:33:04 -07:00
optim [DOC] improve docstring for Optimizer.state_dict (#63153) 2021-08-29 10:20:58 -07:00
package
profiler
quantization [quant][graphmode][fx] Add fbgemm backend_config_dict (#64288) 2021-09-01 16:32:43 -07:00
sparse
special
testing [DDP] Log num threads (#64072) 2021-09-01 18:36:15 -07:00
utils Make datasets in ConcatDataset not need to be sized (#64114) 2021-09-01 15:32:50 -07:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Fix bug in check_empty_containers (#63492) 2021-08-25 09:05:08 -07:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py Use stacklevel for floordiv deprecation warnings (#64034) 2021-08-31 11:27:56 -07:00
_tensor_docs.py
_tensor_str.py
_torch_docs.py document that torch.triangular_solve has optional out= parameter (#63253) 2021-08-26 17:28:17 -07:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
autocast_mode.py bf16 Error message cleanup as well as addition of is_bf16_supported (#63798) 2021-08-25 09:59:59 -07:00
CMakeLists.txt [torch/deploy] add torch.distributed to build (#63918) 2021-08-26 20:58:44 -07:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py
hub.py Fix list() and help() torchhub functions for Windows (#63773) 2021-09-02 04:34:31 -07:00
library.h
overrides.py ENH Adds label_smoothing to cross entropy loss (#63122) 2021-08-29 23:33:04 -07:00
py.typed
quasirandom.py
random.py Adds return type annotation for fork_rng function (#63724) 2021-08-27 09:03:40 -07:00
README.txt
script.h
serialization.py
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should not, in general, be used by
external clients.

Ideally, we would not install these headers at all; instead, clients should
use the public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will need to be updated
when the guts of THTensor and related structures are refactored.