pytorch/torch

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they ship alongside the
public headers, but their contents are *internal implementation detail*
that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.