pytorch/torch

Latest commit: 35c8c31f11 by Aaron Orenstein, 2025-01-22 23:33:02 +00:00
Fix for failure in D68425364 (#145304)
Summary: Back out the change from #145166, which causes an internal model to fail.

Differential Revision: D68459095
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145304
Approved by: https://github.com/izaitsevfb
Name / Last commit message / Last commit date
_awaits
_C [CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441) 2025-01-22 22:42:48 +00:00
_C_flatbuffer
_custom_op PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_decomp PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_dispatch PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_dynamo [PyTorch] Add backend aot_eager_decomp_partition_with_mode (#143250) 2025-01-22 23:20:59 +00:00
_export PEP585 update - torch/_export (#145138) 2025-01-19 18:48:35 +00:00
_functorch [PyTorch] Add backend aot_eager_decomp_partition_with_mode (#143250) 2025-01-22 23:20:59 +00:00
_higher_order_ops PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
_inductor Reverting the PR adding Kleidiai-based int4 kernels (#145392) 2025-01-22 20:11:49 +00:00
_lazy PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_library PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_logging PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_numpy PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_prims PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_prims_common PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_refs PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_strobelight PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102) 2025-01-18 20:47:12 +00:00
_subclasses Revert "Output of nonzero is transposed, fix fake tensor (#144695)" 2025-01-22 23:04:50 +00:00
_vendor
accelerator
amp PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
ao PEP585 update - torch/ao (#145199) 2025-01-20 22:32:35 +00:00
autograd [compiled autograd] Always proxy autograd.Function nodes; handle AOT backwards (#143405) 2025-01-22 21:50:56 +00:00
backends [CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441) 2025-01-22 22:42:48 +00:00
compiler [Doc] Add period at the end of the sentence (#145384) 2025-01-22 19:56:31 +00:00
contrib PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
cpu
csrc [CUDA][cuBLAS] Add fp16 accumulate option to cuBLAS/cuBLASLt (#144441) 2025-01-22 22:42:48 +00:00
cuda PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
distributed PEP585 update - torch/distributed (#145164) 2025-01-21 04:23:29 +00:00
distributions Moved .all() checks for distributions to _is_all_true (#145029) 2025-01-18 07:55:48 +00:00
export Revert "[BE]: Simplify set add with set update (#145152)" 2025-01-22 22:14:26 +00:00
fft
func
futures PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
fx Fix for failure in D68425364 (#145304) 2025-01-22 23:33:02 +00:00
jit PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
legacy
lib
linalg
masked PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
monitor
mps [MPS] Support includes in metal objects (#145087) 2025-01-18 05:35:22 +00:00
mtia [S481486] [MTIA] Correct mtia.device_count() API (#145338) 2025-01-22 17:45:15 +00:00
multiprocessing
nested Implement backward for NJT matmul (#144587) 2025-01-21 18:27:50 +00:00
nn PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
onnx PEP585 update - torch/onnx (#145174) 2025-01-20 05:48:52 +00:00
optim PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
package PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
profiler PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
quantization
signal PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
sparse PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
special
testing Reverting the PR adding Kleidiai-based int4 kernels (#145392) 2025-01-22 20:11:49 +00:00
utils [NVIDIA] Jetson Thor Blackwell Support codegen (#145395) 2025-01-22 20:13:19 +00:00
xpu PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
__config__.py
__future__.py
__init__.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_appdirs.py
_classes.py
_compile.py [BE] typing for decorators (#144161) 2025-01-04 16:40:09 +00:00
_custom_ops.py
_deploy.py
_environment.py
_guards.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_jit_internal.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_linalg_utils.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_lobpcg.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_lowrank.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_meta_registrations.py Reverting the PR adding Kleidiai-based int4 kernels (#145392) 2025-01-22 20:11:49 +00:00
_namedtensor_internals.py
_ops.py [fx] move DCE rand check to import time (#145118) 2025-01-22 02:23:02 +00:00
_python_dispatcher.py
_size_docs.py remove allow-untyped-defs from torch/_size_docs.py (#143942) 2024-12-29 01:00:46 +00:00
_sources.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_storage_docs.py
_streambase.py
_tensor.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_tensor_docs.py Update pin memory related APIs to not pass 'device' argument (#131858) 2025-01-15 17:23:35 +00:00
_tensor_str.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_thread_safe_fork.py
_torch_docs.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_utils.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_utils_internal.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_VF.py
_vmap_internals.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
_weights_only_unpickler.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
abi-check.cpp
CMakeLists.txt Revert "export AOTI_TORCH_EXPORT on Windows. (#140030)" 2025-01-06 18:15:52 +00:00
custom_class.h
custom_class_detail.h Enable readability-redundant-declaration (#143982) 2024-12-31 00:20:10 +00:00
extension.h
functional.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
hub.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
library.h Enable more readability-redundant checks (#143963) 2024-12-30 14:49:33 +00:00
library.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
overrides.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
py.typed
quasirandom.py
random.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
README.txt
return_types.py
script.h
serialization.py PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) 2025-01-21 16:57:27 +00:00
storage.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
torch_version.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
types.py PEP585 update - mostly toplevels (#145178) 2025-01-22 02:21:14 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.