pytorch/torch

Directories:
  _C
  _C_flatbuffer
  _decomp
  _lazy
  _masked
  _prims
  _prims_common
  _refs
  _subclasses
  amp
  ao
  autograd
  backends
  contrib
  cpu
  csrc
  cuda
  distributed
  distributions
  fft
  futures
  fx
  jit
  legacy
  lib
  linalg
  monitor
  multiprocessing
  nested
  nn
  onnx
  optim
  package
  profiler
  quantization
  sparse
  special
  testing
  utils

Files:
  __config__.py
  __future__.py
  __init__.py
  _appdirs.py
  _classes.py
  _deploy.py
  _jit_internal.py
  _linalg_utils.py
  _lobpcg.py
  _lowrank.py
  _meta_registrations.py
  _namedtensor_internals.py
  _ops.py
  _python_dispatcher.py
  _six.py
  _sources.py
  _storage_docs.py
  _tensor.py
  _tensor_docs.py
  _tensor_str.py
  _torch_docs.py
  _utils.py
  _utils_internal.py
  _VF.py
  _vmap_internals.py
  abi-check.cpp
  CMakeLists.txt
  custom_class.h
  custom_class_detail.h
  deploy.h
  extension.h
  functional.py
  hub.py
  library.h
  library.py
  overrides.py
  py.typed
  quasirandom.py
  random.py
  README.txt
  return_types.py
  script.h
  serialization.py
  storage.py
  torch_version.py
  types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers do double duty: they are installed alongside the
public C headers, but they are really *internal implementation detail*
headers, and their contents (including the definitions of the underlying
structs) should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
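
To make the distinction concrete, here is a minimal hypothetical sketch of
the pattern (the `FooTensor` names are illustrative only, not the actual TH
API): the public C-style header exposes an opaque type plus accessor
functions, while the hpp header exposes the struct layout that external
code is not supposed to touch.

```cpp
#include <cstdint>
#include <iostream>

// What a public C-style header (the analogue of THTensor.h) would
// expose: an opaque type plus accessor functions.
struct FooTensor;  // forward declaration only; layout stays hidden
int FooTensor_nDimension(const FooTensor* t);

// What the internal hpp header (the analogue of THTensor.hpp) would
// expose: the full struct layout, an implementation detail.
struct FooTensor {
  std::int64_t sizes[8];  // internal; subject to refactoring at any time
  int ndim;
  int refcount;
};

int FooTensor_nDimension(const FooTensor* t) { return t->ndim; }

int main() {
  FooTensor t{{2, 3}, /*ndim=*/2, /*refcount=*/1};

  // Well-behaved client: stays behind the public accessor.
  std::cout << FooTensor_nDimension(&t) << "\n";  // prints 2

  // Abstraction violation: includes the hpp and pokes the struct
  // directly, breaking the moment FooTensor's guts are refactored.
  std::cout << t.ndim << "\n";  // also prints 2, but couples to layout
  return 0;
}
```

The accessor keeps working no matter how the struct's internals change;
the direct field access is exactly the kind of site this note asks to be
marked and eventually refactored.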