pytorch/torch

Subdirectories:
_awaits, _C, _C_flatbuffer, _custom_op, _decomp, _dispatch, _dynamo,
_export, _functorch, _higher_order_ops, _inductor, _lazy, _logging,
_prims, _prims_common, _refs, _subclasses, amp, ao, autograd, backends,
compiler, contrib, cpu, csrc, cuda, distributed, distributions, fft,
func, futures, fx, jit, legacy, lib, linalg, masked, monitor, mps,
multiprocessing, nested, nn, onnx, optim, package, profiler,
quantization, signal, sparse, special, testing, utils

Files:
__config__.py, __future__.py, __init__.py, _appdirs.py, _classes.py,
_deploy.py, _guards.py, _jit_internal.py, _linalg_utils.py, _lobpcg.py,
_lowrank.py, _meta_registrations.py, _namedtensor_internals.py, _ops.py,
_python_dispatcher.py, _sources.py, _storage_docs.py, _tensor.py,
_tensor_docs.py, _tensor_str.py, _torch_docs.py, _utils.py,
_utils_internal.py, _VF.py, _vmap_internals.py,
_weights_only_unpickler.py, abi-check.cpp, CMakeLists.txt,
custom_class.h, custom_class_detail.h, extension.h, functional.py,
hub.py, library.h, library.py, overrides.py, py.typed, quasirandom.py,
random.py, README.txt, return_types.py, script.h, serialization.py,
storage.py, torch_version.py, types.py, version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers do double duty: they ship alongside the
public headers, but their contents are *internal implementation
details* that external clients should largely not use.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.