pytorch/torch
Yanbo Liang b4d6443bcf [Dynamo] Log innermost user frame filename & lineno for better error aggregation (#115899)
CompilationMetrics example:
```
frame_key='1',
co_name='fn',
co_filename='/data/users/ybliang/debug/debug1.py',
co_firstlineno=58,
cache_size=0,
accumulated_cache_size=0,
guard_count=None,
graph_op_count=None,
graph_node_count=None,
graph_input_count=None,
entire_frame_compile_time_s=None,
backend_compile_time_s=None,
fail_type="<class 'torch._dynamo.exc.Unsupported'>",
fail_reason='custome dict init with args/kwargs unimplemented',
fail_user_frame_filename='/data/users/ybliang/debug/debug1.py',
fail_user_frame_lineno=61
```
where:
* ```fail_type``` and ```fail_reason``` describe the exception raised inside Dynamo.
* ```fail_user_frame_filename``` and ```fail_user_frame_lineno``` point to the innermost user code that triggered the exception.
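The point of the two new fields is that failures can be bucketed by the user-code location instead of by Dynamo-internal frames. A minimal sketch of that aggregation, using a stand-in dataclass whose field names follow the example above (the real record lives in `torch._dynamo`; everything else here — `aggregate_failures`, the sample records — is illustrative):

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# Stand-in for the CompilationMetrics record shown above; only the
# fields relevant to failure aggregation are reproduced.
@dataclass
class CompilationMetrics:
    frame_key: str
    co_name: str
    co_filename: str
    co_firstlineno: int
    fail_type: Optional[str] = None
    fail_reason: Optional[str] = None
    fail_user_frame_filename: Optional[str] = None
    fail_user_frame_lineno: Optional[int] = None

def aggregate_failures(records):
    """Bucket failed compilations by (exception type, user file, user line),
    which is what fail_user_frame_filename/lineno make possible."""
    counts = Counter()
    for m in records:
        if m.fail_type is not None:
            counts[(m.fail_type,
                    m.fail_user_frame_filename,
                    m.fail_user_frame_lineno)] += 1
    return counts

records = [
    CompilationMetrics(
        "1", "fn", "debug1.py", 58,
        fail_type="<class 'torch._dynamo.exc.Unsupported'>",
        fail_reason="custom dict init with args/kwargs unimplemented",
        fail_user_frame_filename="debug1.py",
        fail_user_frame_lineno=61,
    ),
    CompilationMetrics("2", "fn", "debug1.py", 58),  # successful compile
]

print(aggregate_failures(records))
```

Two failures from different Dynamo frames but the same user line land in the same bucket, which is exactly the aggregation the commit message describes.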

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115899
Approved by: https://github.com/davidberard98, https://github.com/ydwu4
2023-12-15 08:24:55 +00:00
_awaits
_C [c10d] Create a python c10d API _set_pg_timeout to set timeout (#115453) 2023-12-12 20:52:43 +00:00
_C_flatbuffer
_custom_op Allow functionalization to work with optional mutable (#114803) 2023-11-30 23:48:03 +00:00
_decomp [inductor] Updated upsample_bilinear2d decomposition (#104182) 2023-12-14 14:50:06 +00:00
_dispatch
_dynamo [Dynamo] Log innermost user frame filename & lineno for better error aggregation (#115899) 2023-12-15 08:24:55 +00:00
_export [export][reland] Remove runtime assertion pass (#115597) 2023-12-15 03:22:03 +00:00
_functorch Consider storage_changed for assigning alias_of_input in aot_autograd when computing differentiable outputs that alias each other (#115315) 2023-12-12 23:21:58 +00:00
_higher_order_ops Allow preserve_rng_state=True when torch.compile + selective checkpointing + CUDA (#113718) 2023-12-09 01:47:25 +00:00
_inductor Back out "[aotinductor] replace lld with the default ld linker (#115478)" (#115875) 2023-12-15 05:56:06 +00:00
_lazy
_library Refactor can_auto_functionalize (#115134) 2023-12-05 22:43:06 +00:00
_logging Sort the output of TORCH_LOGS=help (#114657) 2023-11-30 20:13:51 +00:00
_numpy [BE]: Enable a PLC0131, PLC0132, PLC0205. Fix PLC0132 bug. (#115015) 2023-12-02 20:35:10 +00:00
_prims Add support for torch.Generator type in TorchScript (#110413) 2023-11-21 23:07:21 +00:00
_prims_common [inductor] Allow sympy expressions to participate in type promotion (#115676) 2023-12-13 22:22:37 +00:00
_refs Add decomposition for torch.block_diag (#115096) 2023-12-11 20:04:22 +00:00
_subclasses Extend auto_functionalized to support ops that return Tensors (#115135) 2023-12-05 22:43:06 +00:00
_vendor vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
amp Add Half support for CPU autocast on eager mode (#112484) 2023-11-21 20:08:28 +00:00
ao [quant][fx] Lower operator.matmul in convert_fx (#113954) 2023-12-12 00:34:58 +00:00
autograd [BC breaking] Remove check_sparse_nnz argument of gradcheck (#115658) 2023-12-13 17:34:30 +00:00
backends [MPS] Add MacOS 14 runtime check (#115512) 2023-12-11 21:11:42 +00:00
compiler
contrib
cpu
csrc Introduce 3 low-latency, intra-node allreduce algorithms for small messages to PyTorch (#114001) 2023-12-15 08:17:35 +00:00
cuda [BE] Set torch.cuda.has_half to True (#115884) 2023-12-15 02:30:55 +00:00
distributed Let all_reduce_coalesced accept one tensor as well (#115650) 2023-12-13 21:32:01 +00:00
distributions Fix hang in VonMises rejection sampling for small values of concentration (#114498) 2023-12-04 23:07:06 +00:00
export [export][reland] Remove runtime assertion pass (#115597) 2023-12-15 03:22:03 +00:00
fft
func
futures
fx [aotinductor] add no weight change version of fuse_parallel_linear (#115791) 2023-12-14 18:36:17 +00:00
jit [BE][Easy]: Apply RUF019: remove duplicate checks for dict access (#114478) 2023-11-29 00:14:02 +00:00
legacy
lib
linalg
masked make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
monitor
mps
multiprocessing Robustify torch.multiprocessing.spawn error reporting to be less deadlock prone (#114688) 2023-12-09 03:36:43 +00:00
nested Handle -1 in jagged layout NT view ops (#115843) 2023-12-15 00:42:47 +00:00
nn Fix numpy warning when importing torch without numpy installed (#115867) 2023-12-15 02:22:12 +00:00
onnx Store user model to simplify ONNXProgram.{adapt_torch_*,__call__} APIs (#115281) 2023-12-09 07:46:12 +00:00
optim Added More Information About Adadelta Optimizer (#106290) 2023-12-05 05:55:16 +00:00
package
profiler
quantization
signal
sparse [sparse][semi-structured] enable fp32 support, separate sparse and dense constraints (#115550) 2023-12-15 02:28:17 +00:00
special
testing [TEST] Skip test_schema_correctness for float8 dtype (#115757) 2023-12-15 06:26:46 +00:00
utils Revert "[ROCm] add hipblaslt support (#114329)" 2023-12-14 23:53:30 +00:00
__config__.py
__future__.py
__init__.py Add is_integer to SymFloat (#114703) 2023-12-07 23:23:53 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Add Stateful/Stateless symbolic contexts, use fresh fake mode for dynamo backends (#113926) (#114526) 2023-11-26 23:40:32 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Fix backward for SDPA NT jagged layout (#115576) 2023-12-12 18:35:40 +00:00
_namedtensor_internals.py
_ops.py torch.compile should auto-functionalize certain mutable ops (#114955) 2023-12-05 14:53:08 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_tensor_docs.py
_tensor_str.py Do not error when printing view created in no-grad modified in-place in no-grad (#113716) 2023-11-16 18:57:56 +00:00
_torch_docs.py Updated docs for deprecated torch.set_default_tensor_type (#115041) 2023-12-07 16:17:36 +00:00
_utils.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
_utils_internal.py [inductor][Observability] Add log for Optimus to enable easier debug (#110452) 2023-12-01 18:25:56 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py Make Float8 types serializeable (#114662) 2023-11-29 23:23:23 +00:00
abi-check.cpp
CMakeLists.txt Revert "[Reland2] Update NVTX to NVTX3 (#109843)" 2023-12-05 16:10:20 +00:00
custom_class.h [Reland] [1/N] Fixes clang-tidy warnings in header files (#114668) 2023-11-29 07:11:51 +00:00
custom_class_detail.h
extension.h
functional.py make_fx can now SymIntify int inputs (#113452) 2023-11-18 06:39:09 +00:00
hub.py
library.h
library.py Optimize inspect.stack() call in caffe2/torch/library.py (#114700) 2023-11-29 20:54:02 +00:00
overrides.py Add python and C++ support for LPPool3d (#114199) 2023-12-08 18:18:44 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py [pytree] register pytree node type in both C++ pytree and Python pytree (#112111) 2023-11-28 11:41:38 +00:00
script.h
serialization.py
storage.py
torch_version.py vendor packaging.version (#114108) 2023-11-21 11:51:23 +00:00
types.py improve annotation device parameters where a device ordinal is allowed (#113647) 2023-11-17 14:41:22 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.