pytorch/torch
2024-10-02 18:39:21 +00:00
_awaits
_C raw_alloc ignores PYTORCH_NO_CUDA_MEMORY_CACHING (#131114) 2024-10-02 16:27:15 +00:00
_C_flatbuffer
_custom_op
_decomp Preserve custom ops via run_decomps (#136882) 2024-10-01 17:38:00 +00:00
_dispatch
_dynamo remove capture_autograd_function flag (#136959) 2024-10-02 16:59:19 +00:00
_export [ts_converter] Fix prim::If buffer names (#136648) 2024-10-02 00:07:47 +00:00
_functorch don't let partitioner think it can fuse pointwise ops into user triton kernels (#136878) 2024-10-02 13:52:44 +00:00
_higher_order_ops Add type annotations for higher order ops/flex_attention (#137065) 2024-10-02 04:39:25 +00:00
_inductor [Inductor] External callable registration API for Matmul tuning candidates (#130774) 2024-10-02 15:38:10 +00:00
_lazy
_library Improve data-dependent-output meta kernel error message (#136671) 2024-09-26 03:46:04 +00:00
_logging Don't actually import module when checking if its valid (#136548) 2024-09-25 20:47:32 +00:00
_numpy
_prims Fix AOT Graph capture not propagating non_blocking copy parameter to … (#136513) 2024-10-01 00:32:47 +00:00
_prims_common Fix six broken tests in test_ops.py (#136653) 2024-09-30 20:32:55 +00:00
_refs Add decomposition for squeeze_copy (#130941) 2024-10-01 10:23:22 +00:00
_strobelight [Pytorch] Cleanup Strobelight URL and shorten for readability (#136102) 2024-09-16 18:10:33 +00:00
_subclasses Remove allow-untyped-defs from torch.fx.experimental.symbolic_shapes (#137019) 2024-10-01 13:22:10 +00:00
_vendor
amp
ao Add missing mappings to support torch.uint16 in quantization and export (#136547) 2024-10-01 00:01:01 +00:00
autograd Param fixes in docstring (#136097) 2024-09-21 18:56:34 +00:00
backends [sparse][semi-structured] Add float8 dtype support to 24 sparsity (#136397) 2024-09-27 21:37:34 +00:00
compiler
contrib
cpu
csrc [BE][clang-format] make macro PyObject_HEAD have its own line (#136945) 2024-10-02 18:39:21 +00:00
cuda raw_alloc ignores PYTORCH_NO_CUDA_MEMORY_CACHING (#131114) 2024-10-02 16:27:15 +00:00
distributed [dtensor][experimental] expose DTensor Context Parallel API (#137038) 2024-10-02 18:00:23 +00:00
distributions [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
export Preserve custom ops via run_decomps (#136882) 2024-10-01 17:38:00 +00:00
fft
func
futures
fx Properly interpolate sloc here (#137088) 2024-10-01 18:33:03 +00:00
jit
legacy
lib
linalg docs: clarify alias usage for x parameter in vector_norm function (#136921) 2024-09-30 02:50:06 +00:00
masked [BE]: Update mypy to 1.11.2 (#133816) 2024-09-16 19:44:11 +00:00
monitor
mps
mtia [MTIA] Support torch.cuda.get_device_capability equivalent API on MTIA (#135889) 2024-09-17 17:42:56 +00:00
multiprocessing [torch/multiprocessing] Use multiprocessing.reduction.register ForkingPickler.register to register custom tensor and storage reductions (#135030) 2024-09-16 20:07:29 +00:00
nested Bias gradient calculation for NJT linear backward (#136660) 2024-09-26 21:38:10 +00:00
nn [FlexAttention] Remove restriction on QK headdim > V headdim (#135884) 2024-10-01 21:17:54 +00:00
onnx Remove allow-untyped-defs from torch.fx.experimental.symbolic_shapes (#137019) 2024-10-01 13:22:10 +00:00
optim Add missing input "eps" to adam docs (#135191) 2024-09-25 20:17:23 +00:00
package [3.13] fix 3.13 pickle error in torch/package (#136049) 2024-09-14 14:28:09 +00:00
profiler [Profiler] Torch Profiler distributed info is not JSON serializable (#135548) 2024-09-13 02:22:33 +00:00
quantization
signal
sparse [sparse][semi-structured] Add float8 dtype support to 24 sparsity (#136397) 2024-09-27 21:37:34 +00:00
special
testing Ensure noncontiguous tensor creation tests offsetting (#136396) 2024-10-02 00:40:43 +00:00
utils raw_alloc ignores PYTORCH_NO_CUDA_MEMORY_CACHING (#131114) 2024-10-02 16:27:15 +00:00
xpu Use torch.Stream&torch.Event for Dynamo capture (#134850) 2024-10-02 14:15:33 +00:00
__config__.py
__future__.py
__init__.py Remove allow-untyped-defs from torch.fx.experimental.symbolic_shapes (#137019) 2024-10-01 13:22:10 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_environment.py Improve is_fbcode functionality (#136871) 2024-09-27 21:19:01 +00:00
_guards.py Turn on type-checking in torch.fx.experimental.symbolic_shapes (#136972) 2024-10-01 13:22:10 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Add decomps for max_unpool (#133146) 2024-09-20 21:35:25 +00:00
_namedtensor_internals.py
_ops.py Add type annotations for higher order ops/flex_attention (#137065) 2024-10-02 04:39:25 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py Use torch.Stream&torch.Event for Dynamo capture (#134850) 2024-10-02 14:15:33 +00:00
_tensor.py Revert "Add deterministic path for CUDA cumsum (#136224)" 2024-09-27 12:54:47 +00:00
_tensor_docs.py Revert "Add deterministic path for CUDA cumsum (#136224)" 2024-09-27 12:54:47 +00:00
_tensor_str.py
_torch_docs.py Revert "Add deterministic path for CUDA cumsum (#136224)" 2024-09-27 12:54:47 +00:00
_utils.py
_utils_internal.py Revert "[Pytorch] Consolidate Strobelight compile time profiler between OSS and fbcode (#135953)" 2024-09-15 05:32:38 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
extension.h
functional.py Revert "Add deterministic path for CUDA cumsum (#136224)" 2024-09-27 12:54:47 +00:00
hub.py torch.hub: add get_dir/set_dir type hints (#134906) 2024-09-12 03:53:29 +00:00
library.h
library.py noop on torch.library APIs under torch::deploy (multipy) (#136645) 2024-09-26 02:34:34 +00:00
overrides.py Revert "Add deterministic path for CUDA cumsum (#136224)" 2024-09-27 12:54:47 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py [3.13] fix 3.13 pickle error in serialization.py (#136034) 2024-09-14 00:02:40 +00:00
storage.py
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather
than C headers.  These headers serve double duty as *internal
implementation detail* headers, whose contents should largely not be
used by external clients.

Ideally, we would not install these headers at all; instead, external
clients should use the public functions (declared in headers like
`THTensor.h`, NOT `THTensor.hpp`) to manipulate these structs.  However,
there are a few places in torch/csrc where we violate this abstraction.
They are marked with a pointer to this note.  Each of those sites will
have to be refactored when we refactor the guts of THTensor and related
structures.