pytorch/torch
Yanbo Liang fbfb9a1648 [Dynamo] Improve PT2 fbcode logging observability (#106932)
Summary:
https://docs.google.com/document/d/1D5K3_ELsda3tIUeSyNL_2yee-M3jVWbirqSQ5BDNvHQ/edit

This is the revamped version of D47908299.

For each frame, we record a list of compilation metrics, e.g., backend_compile time, entire_frame_compile time, cache_size, co_filename, co_firstlineno, co_name, guards, graph input_count, graph node_count, graph op_count.

With the help of job info (mast_job_name, global_rank), we can satisfy the requirements listed under `Things I’ve used/wanted to use our logging to determine` in https://docs.google.com/document/d/1D5K3_ELsda3tIUeSyNL_2yee-M3jVWbirqSQ5BDNvHQ/edit (or add more metrics to this framework).
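A minimal sketch of what such a per-frame metrics record could look like. The class and field names below are hypothetical illustrations of the metrics listed above, not the actual structure used in the PR:

```python
from dataclasses import dataclass


@dataclass
class FrameCompileMetrics:
    """Hypothetical per-frame compilation metrics record (illustrative only)."""

    # Frame identity, taken from the code object being compiled
    co_filename: str
    co_firstlineno: int
    co_name: str
    # Compilation cost and cache state
    cache_size: int
    backend_compile_time_s: float
    entire_frame_compile_time_s: float
    # Guard and graph shape statistics
    guard_count: int
    graph_input_count: int
    graph_node_count: int
    graph_op_count: int
    # Job-level context, used to aggregate metrics across a fleet
    mast_job_name: str = ""
    global_rank: int = -1


# Example record for one compiled frame
metrics = FrameCompileMetrics(
    co_filename="model.py",
    co_firstlineno=42,
    co_name="forward",
    cache_size=1,
    backend_compile_time_s=1.7,
    entire_frame_compile_time_s=2.3,
    guard_count=12,
    graph_input_count=3,
    graph_node_count=57,
    graph_op_count=21,
)
print(metrics.co_name)  # forward
```

One record like this per frame, tagged with the job-level fields, is enough to answer fleet-wide questions such as "which frames recompile most" or "where is backend compile time concentrated".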

Test Plan:
```
buck2 test //caffe2/test:test_dynamo
```

Differential Revision: D48142400

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106932
Approved by: https://github.com/anijain2305
2023-08-11 20:46:04 +00:00
_awaits
_C Revert "Add initial support for FP8 ONNX export (#106379)" 2023-08-11 18:22:35 +00:00
_C_flatbuffer
_custom_op [custom_ops] extend impl_abstract to work with existing torch.library ops (#106088) 2023-08-08 13:53:20 +00:00
_decomp Implement decomposition for aten.rrelu_with_noise (#106812) 2023-08-11 19:18:29 +00:00
_dispatch Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_dynamo [Dynamo] Improve PT2 fbcode logging observability (#106932) 2023-08-11 20:46:04 +00:00
_export Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)" 2023-08-11 16:37:47 +00:00
_functorch [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
_higher_order_ops [quant][pt2e][fix] Remove the requirement of using no_grad for reference model that contains quantized conv2d (#106924) 2023-08-10 19:16:10 +00:00
_inductor Skip Triton templates in MM max autotune with zero-size inputs (#106865) 2023-08-11 19:10:16 +00:00
_lazy
_logging [dynamo, logging] add default pt2 logging group (#106417) 2023-08-04 20:34:42 +00:00
_numpy NumPy support in torch.compile (#106211) 2023-08-11 00:39:32 +00:00
_prims Generate mypy hints for torch.Tag, add a couple of pointwise ops (#106910) 2023-08-10 05:12:27 +00:00
_prims_common Remove dynamo+nvfuser (#105789) 2023-08-08 22:29:32 +00:00
_refs [pt2] Add reference implementations of torch.{stft,istft} (#106400) 2023-08-07 20:59:30 +00:00
_subclasses Generate mypy hints for torch.Tag, add a couple of pointwise ops (#106910) 2023-08-10 05:12:27 +00:00
amp Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
ao [pytorch][ao] Add torch.matmul in FloatFunctional/QFunctional (#106831) 2023-08-10 22:43:36 +00:00
autograd _force_original_view_tracking to work as both context manager and function (#106706) 2023-08-07 23:29:22 +00:00
backends Allow setting TORCH_LINALG_PREFER_CUSOLVER=1 to prefer cusolver as linear algebra library globally (#106226) 2023-07-30 09:38:46 +00:00
compiler
contrib
cpu Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
csrc Revert "Add initial support for FP8 ONNX export (#106379)" 2023-08-11 18:22:35 +00:00
cuda MemoryViz.js: format, move style (#106482) 2023-08-03 00:42:13 +00:00
distributed AsyncCollectiveTensor: dont sync on view ops (#105240) 2023-08-11 19:20:25 +00:00
distributions Expose intended public constraints. Fixes #106386 (#106458) 2023-08-04 23:20:59 +00:00
fft
func [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
futures
fx Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)" 2023-08-11 16:37:47 +00:00
jit torch.jit.script escape hatch (#106229) 2023-08-11 18:24:46 +00:00
legacy
lib
linalg [DocString] Fix incorrect api Examples (#105911) 2023-07-25 13:03:06 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [MPS] Introduce torch.mps.Event() APIs (#102121) 2023-08-08 03:45:45 +00:00
multiprocessing Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
nested
nn [pytorch] Disable fast path in MultiheadAttention in Export (#106824) 2023-08-10 00:18:37 +00:00
onnx Revert "Add initial support for FP8 ONNX export (#106379)" 2023-08-11 18:22:35 +00:00
optim Add _foreach_clamp (#106574) 2023-08-10 05:26:09 +00:00
package [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
profiler [profiler] add profiler parsing support for custom device. (#106142) 2023-08-02 20:23:22 +00:00
quantization Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
signal
sparse Revert "[core][pruning][feature] cuSPARSELt kernels and ops (#102133)" 2023-08-09 16:03:14 +00:00
special
testing [test_nn] add custom device support for dropout tests、lazy_modules te… (#106609) 2023-08-11 09:14:34 +00:00
utils asarray: take the default device into consideration. (#106779) 2023-08-11 13:16:42 +00:00
__config__.py
__future__.py
__init__.py Fix some typos, mostly "that that" (#106901) 2023-08-10 19:46:53 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py [custom_ops] extend impl_abstract to work with existing torch.library ops (#106088) 2023-08-08 13:53:20 +00:00
_deploy.py
_guards.py Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)" 2023-08-11 16:37:47 +00:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_sources.py
_storage_docs.py
_tensor.py
_tensor_docs.py Modify signature for tensor.tile in doc (#106295) 2023-08-01 19:51:52 +00:00
_tensor_str.py
_torch_docs.py asarray: take the default device into consideration. (#106779) 2023-08-11 13:16:42 +00:00
_utils.py Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743) 2023-08-08 15:27:34 +00:00
_utils_internal.py [Dynamo] Improve PT2 fbcode logging observability (#106932) 2023-08-11 20:46:04 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt
custom_class.h
custom_class_detail.h
extension.h
functional.py fix torch.norm for custom device (#106198) 2023-08-02 06:25:52 +00:00
hub.py
library.h
library.py Enable registering fallthroughs to (op, dk) from torch.library (#106086) 2023-07-28 19:37:59 +00:00
overrides.py Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)" 2023-08-11 16:37:47 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py [easy] Minor torch.load docs fix (#105876) 2023-07-25 03:58:30 +00:00
storage.py
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.