pytorch/torch
Latest commit: 2022-05-19 19:55:27 +00:00
_C [RELAND] Adds torch.cuda.is_current_stream_capturing (#77789) 2022-05-18 23:18:53 +00:00
_C_flatbuffer
_decomp Fixed type promotion semantics for native_batch_norm and native_layer_norm (#77407) 2022-05-19 17:11:47 +00:00
_lazy Revert "Revert "[LT] Codegen ReuseNode for supported ops"" 2022-05-16 20:14:42 +00:00
_masked masked median 2022-05-19 18:46:26 +00:00
_prims Fixed type promotion semantics for native_batch_norm and native_layer_norm (#77407) 2022-05-19 17:11:47 +00:00
_refs Fixed type promotion semantics for native_batch_norm and native_layer_norm (#77407) 2022-05-19 17:11:47 +00:00
amp Update amp document with CPU Training/Inference Examples (#77244) 2022-05-11 15:42:45 +00:00
ao quant doc: improve rendered documentation for backend_config_dict 2022-05-18 11:46:07 +00:00
autograd Add __all__ for torch.autograd.{anomaly_mode, gradcheck, forward_ad} 2022-05-10 17:36:47 +00:00
backends Add the Runtime components for MPS backend. (#76725) 2022-05-11 17:19:45 +00:00
contrib
cpu
csrc [test] attempt to functionalize ops with mutable positional-only args 2022-05-19 18:50:34 +00:00
cuda Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
distributed Refactor operator dispatch framework across different Tensors. 2022-05-19 19:27:07 +00:00
distributions Add mode property to distributions. (#76690) 2022-05-11 18:26:56 +00:00
fft [complex32] fft support (cuda only) (#74857) 2022-05-12 04:28:55 +00:00
futures
fx Add torch dispatch mode to ProxyTensor tracing (#77174) 2022-05-19 19:53:57 +00:00
jit Adding a way to register both upper and lower bound functions 2022-05-18 17:34:07 +00:00
legacy
lib
linalg Update linalg.*norm 2022-05-18 11:46:50 +00:00
monitor
multiprocessing Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
nested
nn MHA forward pass bug fix 2022-05-19 01:21:24 +00:00
onnx [ONNX] Refactor to remove inline imports - attempt 2 (#77448) 2022-05-16 14:44:24 +00:00
optim Adding maximize to Adamax (#77409) 2022-05-16 17:34:44 +00:00
package Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
profiler
quantization [quant][fx][improvement] Renamed default_affine_fixed_qparams_observer and default_symmetric_fixed_qparams_observer (#76637) 2022-05-04 02:39:20 +00:00
sparse Compressed sparse layout conversion stubs (#77489) 2022-05-16 18:37:42 +00:00
special
testing masked median 2022-05-19 18:46:26 +00:00
utils [DataPipe] Refactor 'mux' to have buffer as an instance variable 2022-05-19 19:55:27 +00:00
__config__.py
__future__.py
__init__.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
_appdirs.py
_classes.py
_deploy.py [lint] upgrade mypy to latest version 2022-05-03 20:51:34 +00:00
_jit_internal.py
_linalg_utils.py Remove deprecated torch.solve (#70986) 2022-05-10 13:44:07 +00:00
_lobpcg.py
_lowrank.py
_meta_registrations.py reflection_pad2d support 2022-05-19 14:43:35 +00:00
_namedtensor_internals.py
_ops.py Return all overloads for an operator in _jit_get_operation 2022-05-04 23:49:47 +00:00
_python_dispatcher.py Lint fix 2022-05-05 05:52:40 +00:00
_six.py
_sources.py
_storage_docs.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
_tensor.py Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"" 2022-05-18 18:40:57 +00:00
_tensor_docs.py Add to_sparse_bsr (#77366) 2022-05-13 20:16:03 +00:00
_tensor_str.py Support str for Sparse Compressed tensors 2022-05-18 12:58:54 +00:00
_torch_docs.py rocblas alt impl during backward pass only (#71881) 2022-05-18 19:42:58 +00:00
_utils.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt [Reland take-2] Add JIT graph fuser for oneDNN Graph API (v0.5) 2022-05-05 16:57:03 +00:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py Revert "stft: remove non-center overload and python functional wrapper" 2022-05-09 19:59:46 +00:00
hub.py Minor torchhub docs 2022-05-10 11:01:02 +00:00
library.h Back out Dispatcher change that makes Messenger Desktop crash on M1 devices (#77414) 2022-05-13 17:33:53 +00:00
library.py Add meta tensor support for some operations using python registration 2022-05-10 17:55:06 +00:00
overrides.py Revert "Revert "Implement sym_sizes to create proper IR for sym ints representing tensor sizes (#76836)"" 2022-05-18 18:40:57 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
storage.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but their contents are *internal implementation
details* that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.