pytorch/torch
rzou 5e0ef84b01 [dynamo] Refactor install_global_once, remove usages of install_global_unsafe (#118100)
We split install_global_once into two APIs:
- `install_global_by_id(prefix, value) -> name`: installs a global if it hasn't
  been installed yet
- `install_global(prefix, value) -> name`: always installs the global (and
  generates a unique name for it)

Then, we refactor most callsites of `install_global_unsafe` to use one of
these two APIs. Some callsites cannot be refactored because we create the
global name first, do a lot of work with it, and only then install it.
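
For reference, a minimal sketch of the intended semantics (illustrative
only — the `_globals` dict, the `_installed_ids` cache, and the
name-generation scheme below are stand-ins, not dynamo's actual
implementation, which writes into the compiled frame's global scope):

```python
import itertools

_counter = itertools.count()
_globals = {}        # stand-in for the compiled frame's global scope
_installed_ids = {}  # (prefix, id(value)) -> previously installed name

def install_global(prefix, value):
    # Always install, under a freshly generated unique name.
    name = f"{prefix}_{next(_counter)}"
    _globals[name] = value
    return name

def install_global_by_id(prefix, value):
    # Install at most once per object identity; later calls reuse the name.
    key = (prefix, id(value))
    if key not in _installed_ids:
        _installed_ids[key] = install_global(prefix, value)
    return _installed_ids[key]

def install_global_unsafe(name, value):
    # Caller picks the name up front; "unsafe" because nothing prevents
    # two callers from choosing the same name.
    _globals[name] = value
```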

This fixes more test flakiness.

Test Plan:
- Existing tests; I can't reliably repro the flakiness
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118100
Approved by: https://github.com/ezyang, https://github.com/mlazos
2024-01-24 23:25:44 +00:00
_awaits
_C additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
_C_flatbuffer
_custom_op [inductor][custom ops] Add tag to custom ops to preserve stride orders in inductor (#117298) 2024-01-21 18:47:01 +00:00
_decomp Revert "accelerate binary_cross_entropy_with_logits by using log_sigmoid operator (#115539)" 2024-01-22 14:48:35 +00:00
_dispatch
_dynamo [dynamo] Refactor install_global_once, remove usages of install_global_unsafe (#118100) 2024-01-24 23:25:44 +00:00
_export [sigmoid] Add canonicalized IR as an option. (#116758) 2024-01-24 03:11:25 +00:00
_functorch Enhance torch.vmap support from inside torch.compile (#116050) 2024-01-22 17:53:45 +00:00
_higher_order_ops Initial torchbind support in PT2 (#117697) 2024-01-19 06:28:20 +00:00
_inductor [AOTI] Support .item() in the ABI-compatible mode (#117989) 2024-01-24 20:17:59 +00:00
_lazy
_library
_logging [export] Add TORCH_LOGS=export (#116993) 2024-01-11 03:02:23 +00:00
_numpy [dynamo] Fix np.issubdtype (#116459) 2024-01-05 01:48:07 +00:00
_prims
_prims_common [BE]: Add type alias typing annotation to prims_common (#117928) 2024-01-21 14:26:59 +00:00
_refs
_subclasses Ban mutation on dropout outputs in export (#117879) 2024-01-21 04:53:40 +00:00
_vendor
amp
ao [Quant] [PT2] Add Hardswish into X86InductorQuantizer Conv2d Unary Annotation (#117488) 2024-01-20 01:37:33 +00:00
autograd cleanup code comments _compute_numerical_gradient (#117484) 2024-01-19 18:51:52 +00:00
backends Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
compiler Add a wrapper to transform a NumPy function into a PyTorch function (#114610) 2024-01-02 18:35:29 +00:00
contrib
cpu
csrc Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
cuda Try creating a bf16 tensor as a last resort of is_bf16_supported(). (#115924) 2024-01-01 01:15:30 +00:00
distributed [dtensor] rewrite embedding ops using op strategy (#118079) 2024-01-24 19:12:12 +00:00
distributions
export Added type checking for ExportedProgram (#117231) 2024-01-24 18:24:44 +00:00
fft
func
futures
fx [fx] add an option to not retrace when doing op fusion (#118120) 2024-01-24 09:41:26 +00:00
jit [BE]: Update flake8 to v6.1.0 and fix lints (#116591) 2024-01-03 06:04:44 +00:00
legacy
lib
linalg
masked
monitor
mps
multiprocessing
nested Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
nn Remove sdp_kernel and replace with sdpa_kernel in attention namespace (#114689) 2024-01-24 22:28:04 +00:00
onnx [ONNX] Improve support to mmap for ONNXProgram.save (#117863) 2024-01-23 02:00:00 +00:00
optim Add guardrails preventing complex params in LBFGS & SparseAdam (#118161) 2024-01-24 21:22:47 +00:00
package [BE]: Add better handling of pathlib.Path with os calls (#116564) 2023-12-31 01:46:03 +00:00
profiler
quantization
signal Fix NaN bug in torch.signal.windows.kaiser (#116470) 2024-01-08 22:24:52 +00:00
sparse Update F32 sparse semi-structured support for CUTLASS back-end (#116017) 2023-12-22 16:53:04 +00:00
special
testing Check if enable inside run call (#118101) 2024-01-24 22:38:41 +00:00
utils Enhance torch.vmap support from inside torch.compile (#116050) 2024-01-22 17:53:45 +00:00
__config__.py
__future__.py
__init__.py [dynamo] Added dyn shapes support for math trigo ops: sin(h), cos(h), tan(h) ... (#114866) 2024-01-11 11:52:28 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py
_jit_internal.py
_linalg_utils.py
_lobpcg.py
_lowrank.py
_meta_registrations.py additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
_namedtensor_internals.py
_ops.py Simplify kwargs propagation in __call__. (#117880) 2024-01-20 19:29:35 +00:00
_python_dispatcher.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
_tensor_docs.py Pyi doc inclusion + fix (#117267) 2024-01-15 13:06:53 +00:00
_tensor_str.py
_torch_docs.py Pyi doc inclusion + fix (#117267) 2024-01-15 13:06:53 +00:00
_utils.py pre_dispatch aot_export (#115188) 2023-12-25 04:51:21 +00:00
_utils_internal.py
_VF.py
_vmap_internals.py
_weights_only_unpickler.py additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
abi-check.cpp
CMakeLists.txt [BE] [cuDNN] Always build assuming cuDNN >= 8.1 (#95722) 2024-01-03 15:41:28 +00:00
custom_class.h
custom_class_detail.h
extension.h
functional.py
hub.py Increase hub download chunk size (#116536) 2024-01-03 17:38:45 +00:00
library.h
library.py
overrides.py Introduce slice_inverse() op (#117041) 2024-01-16 23:44:54 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py [pytree] reuse flatten_fn in flatten_with_keys_fn to ensure consistency (#117656) 2024-01-17 20:38:49 +00:00
script.h
serialization.py [BE]: Use os.fspath and os.PathLike in torch serialization (#116562) 2023-12-30 20:53:10 +00:00
storage.py additional support for float8_e4m3fnuz and _e5m2fnuz (#115214) 2024-01-22 18:33:41 +00:00
torch_version.py
types.py
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but their contents are *internal implementation
details* that should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.