pytorch/torch
_awaits
_C Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)" 2024-07-26 18:08:20 +00:00
_C_flatbuffer
_custom_op
_decomp [BE] typing for decorators - jit/_decompositions (#131566) 2024-07-24 20:28:28 +00:00
_dispatch
_dynamo Remove mypy ignore from torch/_dynamo/variables/__init__.py (#131784) 2024-07-27 05:07:33 +00:00
_export [export] fix set_grad x tensor constant. (#131787) 2024-07-26 16:41:59 +00:00
_functorch [AOT Autograd] Donated Buffer (#130580) 2024-07-26 17:14:34 +00:00
_higher_order_ops Support meta tensors as inputs to the triton_kernel_wrapper HOPs (#131896) 2024-07-26 21:41:03 +00:00
_inductor [micro_pipeline_tp] exclude simple overlappable collectives as micro-pipeline TP candidates when reorder_for_compute_comm_overlap is enabled (#131410) 2024-07-27 11:07:43 +00:00
_lazy
_library [BE] typing for decorators - _library/custom_ops (#131578) 2024-07-25 22:24:19 +00:00
_logging [pt2] Increase dynamo/inductor default log level to info (#131311) 2024-07-22 17:33:29 +00:00
_numpy Make hashing a SymInt raise an error again (#130548) 2024-07-16 18:30:30 +00:00
_prims Fix out_wrapper, _make_copy_from_view to handle all signatures (#130937) 2024-07-21 20:39:24 +00:00
_prims_common [BE] typing for decorators - _prims_common/wrappers (#131567) 2024-07-25 14:35:13 +00:00
_refs [BE] typing for decorators - _refs/nn/functional (#131581) 2024-07-26 05:00:03 +00:00
_strobelight
_subclasses fast-path FakeTensor detach (#131899) 2024-07-26 20:16:08 +00:00
_vendor
amp
ao Fix public API tests (#131386) 2024-07-26 23:38:43 +00:00
autograd [Profiler] exclude gpu_user_annotation when accumulating cuda time total (#130733) 2024-07-22 04:35:21 +00:00
backends [BE] typing for decorators - _jit_internal (#131573) 2024-07-25 22:24:19 +00:00
compiler
contrib
cpu
csrc Add fallback() to torch.library (#131707) 2024-07-27 18:02:35 +00:00
cuda Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)" 2024-07-26 18:08:20 +00:00
distributed Add out_dtypes to fused_all_gather_scaled_matmul's args (#131831) 2024-07-27 11:07:43 +00:00
distributions
export [pt] immutable accessors in graph signature (#131940) 2024-07-27 05:32:53 +00:00
fft
func
futures
fx carry cond in data-dependent error (#131932) 2024-07-27 02:13:04 +00:00
jit [BE] typing for decorators - _jit_internal (#131573) 2024-07-25 22:24:19 +00:00
legacy
lib
linalg
masked [BE] typing for decorators - masked/_ops (#131569) 2024-07-25 22:24:19 +00:00
monitor
mps
mtia Revert "MTIA equivalent of torch.cuda.memory_stats (#131673)" 2024-07-26 00:54:37 +00:00
multiprocessing
nested Revert "[NestedTensor] Integrate the layer normalization operator along the jagged dimension into NestedTensor (#131519)" 2024-07-27 14:45:47 +00:00
nn Fix public API tests (#131386) 2024-07-26 23:38:43 +00:00
onnx Fix public API tests (#131386) 2024-07-26 23:38:43 +00:00
optim Add __all__ to torch.optim to define public interface (#131959) 2024-07-27 01:03:25 +00:00
package
profiler
quantization
signal [BE] typing for decorators - signal/windows/windows (#131582) 2024-07-26 05:00:07 +00:00
sparse [BE] mypy: disallow untyped decorators (#131428) 2024-07-23 21:50:55 +00:00
special
testing [Traceable FSDP2][Inductor] Create grouped nodes for FSDP2 all-gather code block and reduce-scatter code block (after Buffer/Operation split) (#131510) 2024-07-27 08:39:58 +00:00
utils [FlopCounterMode] Fix register_flop_formula (#131777) 2024-07-26 18:44:50 +00:00
xpu
__config__.py
__future__.py
__init__.py Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)" 2024-07-26 18:08:20 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py
_jit_internal.py [BE] typing for decorators - _jit_internal (#131573) 2024-07-25 22:24:19 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Fix meta error in _convert_weight_to_int4pack (#130915) 2024-07-26 08:36:30 +00:00
_namedtensor_internals.py
_ops.py _get_operation_overload: dont raise exception when overload does not exist (#131554) 2024-07-26 15:38:11 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py
_tensor_docs.py [MTIA] Support module.mtia() (#131499) 2024-07-25 04:23:48 +00:00
_tensor_str.py fix tensor print behavior for XPU (#130523) 2024-07-17 02:03:32 +00:00
_torch_docs.py
_utils.py
_utils_internal.py Write trace_structured events to scuba (#130955) 2024-07-19 06:02:47 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Blocklist certain modules for weights_only load (#131259) 2024-07-22 18:23:21 +00:00
abi-check.cpp
CMakeLists.txt Revert "Add wrappers for synchronous GPUDirect Storage APIs (#130633)" 2024-07-26 18:08:20 +00:00
custom_class.h
custom_class_detail.h
extension.h
functional.py
hub.py
library.h [3/N] Fix Wunused-parameter warnings (#131271) 2024-07-20 23:31:03 +00:00
library.py Add fallback() to torch.library (#131707) 2024-07-27 18:02:35 +00:00
overrides.py [MTIA] Support module.mtia() (#131499) 2024-07-25 04:23:48 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Updating Types in torch/_dynamo/utils.py (#131001) 2024-07-23 18:25:52 +00:00
storage.py Fix public API tests (#131386) 2024-07-26 23:38:43 +00:00
torch_version.py Add mypy typing to torch_version.py (#131447) 2024-07-23 17:31:07 +00:00
types.py FakeTensor cache SymInt support (#127596) 2024-07-21 19:26:38 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers are *internal implementation detail* headers,
whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.