pytorch/torch
albanD b81f1d1bee Speed up cpp extensions re-compilation (#104280)
Fixes https://github.com/pytorch/pytorch/issues/68066 to a large extent.

This is achieved by not rewriting generated files whose contents have not changed, so their mtimes stay stable and ninja's caching works as expected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104280
Approved by: https://github.com/fmassa
2023-06-28 17:06:07 +00:00
_awaits
_C DDP + C10D sparse all_reduce changes (#103916) (#104256) 2023-06-28 00:37:52 +00:00
_C_flatbuffer
_custom_op
_decomp [decomp] Add decomposition for torch.renorm (#103858) 2023-06-21 20:57:43 +00:00
_dispatch Reland of https://github.com/pytorch/pytorch/pull/101818 (#103888) 2023-06-21 21:00:56 +00:00
_dynamo [HigherOrderOp] Fall back on all new side effects in speculate_subgraph (#104077) 2023-06-28 14:20:37 +00:00
_export Handle custom higher order ops (#104285) 2023-06-28 01:53:36 +00:00
_functorch Revert "Re-enable low memory dropout (#103330)" 2023-06-28 04:27:37 +00:00
_higher_order_ops [HigherOrderOp] Remove _deprecated_global_ns from some ops (#104105) 2023-06-28 00:03:29 +00:00
_inductor Revert "Re-enable low memory dropout (#103330)" 2023-06-28 04:27:37 +00:00
_lazy
_logging Add graph break logging option instead of config flag (#103202) 2023-06-12 19:52:31 +00:00
_prims [HigherOrderOp] Remove _deprecated_global_ns from some ops (#104105) 2023-06-28 00:03:29 +00:00
_prims_common [decomp] Add decomposition for torch.renorm (#103858) 2023-06-21 20:57:43 +00:00
_refs [decomp] Add decomposition for torch.renorm (#103858) 2023-06-21 20:57:43 +00:00
_subclasses [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
amp Fix missing mandatory device_type argument in autocast docstring (#97223) 2023-06-27 01:54:54 +00:00
ao [Quant][PT2E] Supported customized _EQUIVALENT_TYPES in Module Partition API (#102516) 2023-06-28 00:20:25 +00:00
autograd Deprecate "Type" and support more devices for save_on_cpu (#103245) 2023-06-09 05:05:01 +00:00
backends [BE] Deprecate has_XYZ attributes (#103279) 2023-06-10 05:17:17 +00:00
compiler torch.compiler public namespace (#102182) 2023-06-13 19:52:17 +00:00
contrib
cpu Quantization oneDNN backend only support VNNI CPU (#103653) 2023-06-19 09:50:07 +00:00
csrc sampled_addmm: backward performance improvements (#103544) 2023-06-28 08:49:54 +00:00
cuda [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
distributed [FSDP] Check module.training for _root_cast_forward_inputs (#104223) 2023-06-28 16:38:01 +00:00
distributions Fix Dirichlet.log_prob() when x=0 and alpha=1 (#103605) 2023-06-15 16:16:50 +00:00
fft
func [pt2] grad support (#102264) 2023-06-21 10:13:09 +00:00
futures
fx Preserve all submodules/parameters/buffers when unpickle graph module (#104115) 2023-06-26 06:59:48 +00:00
jit Fix shape function for transpose convolution (#102139) 2023-06-21 17:50:56 +00:00
legacy
lib Use size_t in THManagedMapAllocator (#103331) 2023-06-13 04:50:30 +00:00
linalg [Doc] linalg.ldl_factor: render the Shape of tensor A (#99777) 2023-06-28 09:28:45 +00:00
masked Fix autograd issue with identity conversions (#92022) 2023-06-21 21:23:03 +00:00
monitor
mps [doc] Improve mps package description (#104184) 2023-06-27 15:50:36 +00:00
multiprocessing
nested
nn Change nn.Module.__getattr__ return type to Any (#104321) 2023-06-28 16:14:36 +00:00
onnx [onnx] Convert aten::flatten with 0d input to onnx Reshape and 1d to Identity (#104089) 2023-06-28 17:01:43 +00:00
optim Fix lr_scheduler serialization contains bound methods issue (#102627) 2023-06-23 03:53:15 +00:00
package
profiler [PyPer][ET] Refactor EG to ET (#99694) 2023-06-22 19:41:54 +00:00
quantization
signal
sparse [core][pruning][sparse][feature] SparseSemiStructured tensor subclass (#102135) 2023-06-27 19:21:06 +00:00
special
testing enable ASAN on some tests (#103647) 2023-06-28 02:17:14 +00:00
utils Speed up cpp extensions re-compilation (#104280) 2023-06-28 17:06:07 +00:00
__config__.py
__future__.py
__init__.py Make torch.empty* deterministic by filling with NaN or max int value (#101849) 2023-06-21 02:53:22 +00:00
_appdirs.py
_classes.py
_deploy.py
_guards.py Lift user defined attributes into inputs for certain cases (user defined types and tensors) (#103386) 2023-06-20 23:45:19 +00:00
_jit_internal.py default should be used as default value in boolean_dispatch (#103463) 2023-06-14 03:16:31 +00:00
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py REDO of dropout support for mem eff #102038 (#103704) 2023-06-26 23:05:03 +00:00
_namedtensor_internals.py
_ops.py Raise AttributeError in _OpsNamespace if __self__ attribute is requested (#104096) 2023-06-27 01:42:06 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor.py This extra message would have helped with Wav2Vec2 debugging. (#103002) 2023-06-06 04:28:16 +00:00
_tensor_docs.py Added is_xla (#103100) 2023-06-22 23:31:04 +00:00
_tensor_str.py
_torch_docs.py Make torch.empty* deterministic by filling with NaN or max int value (#101849) 2023-06-21 02:53:22 +00:00
_utils.py fix hpu storage serialization (#101680) 2023-06-21 21:19:49 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt enable more ASAN tests (#101483) 2023-06-15 05:21:15 +00:00
custom_class.h
custom_class_detail.h
extension.h
functional.py
hub.py
library.h
library.py
overrides.py Remove redundant dummy overrides (#103992) 2023-06-28 01:59:56 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Add docstring to torch.serialization.register_package (#104046) 2023-06-26 23:28:32 +00:00
storage.py fix hpu storage serialization (#101680) 2023-06-21 21:19:49 +00:00
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients: they expose struct layouts that the public C API deliberately hides.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.