pytorch/torch
Rohan Varma 8acc92eb00 [FSDP] Print exec order only in debug mode (#83868)
Since the exec order warning can result in a very long module-name printout, gate it to print only in debug mode. Often, such as in multimodal training, there is not much we can do about this warning since some modules go unused in certain iterations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83868
Approved by: https://github.com/awgu
2022-08-29 17:10:25 +00:00
_C Revert "[ONNX] Export node and value with scope name (#82040)" 2022-08-29 06:36:18 +00:00
_C_flatbuffer
_decomp [Prim] Implement group_norm_backward (#84037) 2022-08-29 09:29:30 +00:00
_dispatch Torch cond operator, python dispatch, pyoperator (#83154) 2022-08-25 20:11:53 +00:00
_lazy [LTC] Add custom lazy tensor save function (#83294) 2022-08-24 15:35:43 +00:00
_masked Fix use-dict-literal lint (#83718) 2022-08-24 00:26:46 +00:00
_prims Add nvprims.var_mean (#83508) 2022-08-28 18:45:25 +00:00
_prims_common Add nvprims.var_mean (#83508) 2022-08-28 18:45:25 +00:00
_refs Add nvprims.var_mean (#83508) 2022-08-28 18:45:25 +00:00
_subclasses Revert "Don't introduce new overload for SymInt (#83628)" 2022-08-27 01:23:17 +00:00
amp
ao [quant][ao_migration] torch.nn.qat → torch.ao.nn.qat (#78716) 2022-08-25 16:50:38 +00:00
autograd More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
backends More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
contrib
cpu
csrc [Profiler] Add disabled and global methods to ProfilerConfig. (#83891) 2022-08-29 08:56:54 +00:00
cuda
distributed [FSDP] Print exec order only in debug mode (#83868) 2022-08-29 17:10:25 +00:00
distributions More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
fft
futures More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
fx [fx][pass] Fix type of exception (#84094) 2022-08-29 16:55:59 +00:00
jit [quant][ao_migration] torch.nn.quantized.dynamic → torch.ao.nn.quantized.dynamic (#78714) 2022-08-25 16:50:34 +00:00
legacy
lib
linalg Strengthen preconditions of linalg.cross (#83798) 2022-08-24 15:17:12 +00:00
masked [maskedtensor] adding unary and binary operations (#82837) 2022-08-22 21:00:38 +00:00
monitor More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
multiprocessing
nested
nn conv2d: require bias to have the same dtype as input and weight on cpu (#83686) 2022-08-29 16:41:17 +00:00
onnx Revert "[ONNX] Export node and value with scope name (#82040)" 2022-08-29 06:36:18 +00:00
optim [optim] rprop: handle complex params as independent real params (#83858) 2022-08-23 08:39:35 +00:00
package
profiler [Profiler][Minor] Extend Python bindings (#83622) 2022-08-26 20:03:24 +00:00
quantization Add Custom Module Support List (#82606) 2022-08-03 17:48:51 +00:00
sparse
special
testing conv2d: require bias to have the same dtype as input and weight on cpu (#83686) 2022-08-29 16:41:17 +00:00
utils nit fixes in modes (#83924) 2022-08-29 15:27:04 +00:00
__config__.py
__future__.py
__init__.py
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Make linalg.inv composite of linalg.solve (#80074) 2022-08-25 09:28:55 +00:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor.py
_tensor_docs.py
_tensor_str.py More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
_torch_docs.py More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
_utils.py
_utils_internal.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Enable -Wunused-local-typedefs (#83708) 2022-08-26 15:45:47 +00:00
custom_class.h
custom_class_detail.h
deploy.h
extension.h
functional.py More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
hub.py Add type hints to torch.save, torch.load (#83937) 2022-08-26 18:58:25 +00:00
library.h
library.py
overrides.py NestedTensor Softmax (#83435) 2022-08-17 21:57:42 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Add type hints to torch.save, torch.load (#83937) 2022-08-26 18:58:25 +00:00
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.