Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but they are really *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.