pytorch/torch
_awaits
_C [MTIA] Support torch.mtia.empty_cache() (#141533) 2024-11-28 02:24:19 +00:00
_C_flatbuffer
_custom_op
_decomp [BE] Make maybe_aliasing_or_mutating proper tag (#131990) 2024-11-24 00:12:49 +00:00
_dispatch
_dynamo Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
_export Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
_functorch Revert "[REFACTOR] Inline FxGraphCache.post_compile into sole call site (#141877)" 2024-12-02 21:26:13 +00:00
_higher_order_ops Revert "Ensure that BlockMask length must always exactly match the sequence length in flex_attention (#141625)" 2024-12-02 14:10:38 +00:00
_inductor Revert "[inductor][pattern matcher] revise mkldnn pattern matcher UT (#141334)" 2024-12-02 21:29:02 +00:00
_lazy
_library [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
_logging [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
_numpy
_prims [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
_prims_common Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
_refs Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
_strobelight
_subclasses Switch to using Python nested int (#141166) 2024-12-02 19:17:30 +00:00
_vendor
accelerator
amp [MPS] Add support for bf16 autocast (#139390) 2024-11-20 19:52:28 +00:00
ao Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
autograd Missing space in torch.autograd.Function deprecation warning (#141562) 2024-11-27 01:31:26 +00:00
backends
compiler [dynamo] skip_guard_eval_unsafe stance for power users (#140251) 2024-11-21 06:28:58 +00:00
contrib
cpu [Inductor][CPP] Add oneDNN BRGEMM config for Half cpp gemm template (#136255) 2024-11-05 05:33:29 +00:00
csrc cpp_wrapper: Add support for MemoryFormat arguments (#141367) 2024-12-02 20:40:24 +00:00
cuda [ROCM] Support Multi-GPU offline tuning in TunableOp (#139673) 2024-11-26 19:07:41 +00:00
distributed Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
distributions [BE] Use torch.special.expm1 (#141518) 2024-11-26 01:47:11 +00:00
export improve typings in unflatten (#141817) 2024-11-30 22:12:15 +00:00
fft
func
futures
fx Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
jit Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
legacy
lib Add and use thread-safe strerror (#140472) 2024-11-19 04:24:17 +00:00
linalg
masked
monitor
mps
mtia [MTIA] Support torch.mtia.empty_cache() (#141533) 2024-11-28 02:24:19 +00:00
multiprocessing
nested Switch to using Python nested int (#141166) 2024-12-02 19:17:30 +00:00
nn Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
onnx [ONNX] Remove special handling of torchvision.ops imports in onnx export (#141569) 2024-11-28 18:05:40 +00:00
optim Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
package [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
profiler Add skip_first_wait to profiler.schedule (V2) (#141512) 2024-11-26 18:10:54 +00:00
quantization
signal
sparse [sparse] add extra options to _cslt_sparse_mm (#137427) 2024-11-27 05:32:45 +00:00
special
testing Revert "[BE]: Update mypy to 1.13.0 (#140808)" 2024-12-02 20:47:43 +00:00
utils Revert "[dynamo][pytree][1/N] make CXX pytree traceable: tree_iter / tree_leaves (#137397)" 2024-12-02 16:05:14 +00:00
xpu [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
__config__.py
__future__.py
__init__.py [dynamo] add SymNode bitwise and/or (#138777) 2024-11-22 23:36:16 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_environment.py
_guards.py dynamo: guard on FSDP module parameters (#138819) 2024-11-13 20:46:46 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py [sparse] add extra options to _cslt_sparse_mm (#137427) 2024-11-27 05:32:45 +00:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py type annotations for meta_utils (#140203) 2024-11-13 20:07:47 +00:00
_tensor_docs.py
_tensor_str.py
_thread_safe_fork.py
_torch_docs.py Clarify torch.arange floating-point rounding behavior (#141655) 2024-11-27 09:31:39 +00:00
_utils.py [Device] Add mps as device type in torch._utils._get_available_device_type() (#141098) 2024-11-20 20:45:59 +00:00
_utils_internal.py Change export IR to non-functional pre-dispatch IR (#139511) 2024-11-20 21:47:55 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Add small test case for #140230 (#140850) 2024-11-19 02:44:54 +00:00
abi-check.cpp
CMakeLists.txt Add torch.version.xpu (#139466) 2024-11-09 13:31:21 +00:00
custom_class.h
custom_class_detail.h
extension.h
functional.py
hub.py
library.h
library.py no-op torch.library.custom_op APIs on torch.deploy (#139509) 2024-11-04 18:01:08 +00:00
overrides.py
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Allow NJT by default for weights_only torch.load (take 2) (#140739) 2024-11-19 02:44:53 +00:00
storage.py
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside the
public C headers, but their contents are an *internal implementation detail*
that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate the structs they define.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
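
To make the convention concrete, here is a minimal sketch of the two styles
this note contrasts.  The accessor and field names used below
(THFloatTensor_nDimension, THFloatTensor_size, and the nDimension/size struct
fields) follow the historical TH C API and are assumptions for illustration,
not a guarantee about the installed headers; the point is only that external
code should go through the functions declared in `THTensor.h` instead of
depending on the struct layout that `THTensor.hpp` exposes.

    // Sketch only: the names below follow the historical TH C API and are
    // assumptions for illustration; check the installed THTensor.h for the
    // exact declarations.
    #include <cstdint>
    #include <TH/THTensor.h>     // public C API -- fine for external clients
    // #include <TH/THTensor.hpp>  // internal implementation detail -- avoid outside torch/csrc

    // Preferred: query the tensor only through the public accessor functions.
    int64_t numel_via_public_api(THFloatTensor* t) {
      int64_t n = 1;
      for (int d = 0; d < THFloatTensor_nDimension(t); ++d) {
        n *= THFloatTensor_size(t, d);   // accessor declared in THTensor.h
      }
      return n;
    }

    // Abstraction violation (the kind this note asks to mark in torch/csrc):
    // reading struct fields that only THTensor.hpp makes visible.  Shown
    // commented out because it depends on the internal struct layout.
    //
    //   int64_t numel_via_struct(THFloatTensor* t) {
    //     int64_t n = 1;
    //     for (int d = 0; d < t->nDimension; ++d) {
    //       n *= t->size[d];
    //     }
    //     return n;
    //   }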