pytorch/torch
Xuehai Pan 93a33bf3ac [BE] update type annotations for basic utilities in torch/__init__.py (#129001)
Changes:

1. Make some arguments positional-only, since we now support only Python 3.8+ (where PEP 570 `/` syntax is available).
2. Clean up the `torch.typename(obj)` implementation.
3. Update type annotations, especially annotating `is_tensor()` and `is_masked_tensor()` with `TypeGuard` (see the sketch below).
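
As a rough illustration of changes 1 and 3, here is a minimal sketch of the
`TypeGuard` pattern (assuming Python 3.10+, where `typing.TypeGuard` lives in
the standard library; on 3.8/3.9 it comes from `typing_extensions`). The
bodies are simplified stand-ins rather than the exact upstream implementation,
and `double_if_tensor` is a hypothetical helper used only for demonstration:

```python
from typing import Any, TypeGuard

import torch


def is_tensor(obj: Any, /) -> TypeGuard[torch.Tensor]:
    # `/` makes `obj` positional-only (PEP 570, Python 3.8+), per change 1.
    # Returning TypeGuard[torch.Tensor] instead of plain bool lets static
    # type checkers narrow `obj` to torch.Tensor when this returns True.
    return isinstance(obj, torch.Tensor)


def double_if_tensor(x: Any) -> Any:
    # Hypothetical caller showing the narrowing in action.
    if is_tensor(x):
        # Type checkers treat `x` as torch.Tensor in this branch, so tensor
        # methods like `.mul()` type-check without any cast.
        return x.mul(2)
    return x
```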

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129001
Approved by: https://github.com/malfet
2024-06-24 18:04:38 +00:00
_awaits
_C [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
_C_flatbuffer
_custom_op Rename impl_abstract to register_fake, part 2/2 (#123938) 2024-06-14 14:37:24 +00:00
_decomp Fix weight_norm decomposition behavior (#128956) 2024-06-18 21:24:12 +00:00
_dispatch
_dynamo [Traceable FSDP2] Add aot_eager backend E2E tests for transformer model (#129157) 2024-06-23 06:11:11 +00:00
_export [ts migration] Support prim::tolist and aten::len (#128894) 2024-06-18 19:11:07 +00:00
_functorch [Brian's PR #128754] Use torch.ops.fsdp.set_ for FSDP2 storage resize; don't functionalize resize_, set_, split_with_sizes_copy.out (#129203) 2024-06-23 06:07:19 +00:00
_higher_order_ops [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
_inductor Revert "[halide-backend] Initial implementation of HalideKernel and HalideScheduling (#126417)" 2024-06-24 16:50:15 +00:00
_lazy
_library [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_logging [FSDP2] Add 'TORCH_LOGS=+fsdp' to log hooks (pre/post forward/backward) and FQN (_init_fqns) (#128663) 2024-06-21 23:25:58 +00:00
_numpy
_prims
_prims_common
_refs Fix exp decomp numerics (#129154) 2024-06-21 03:21:30 +00:00
_strobelight
_subclasses [subclasses] Handle dynamo inputs that are subclass views with (-1) in the view (#128662) 2024-06-15 14:58:18 +00:00
_vendor
amp add xpu for amp (#127276) 2024-06-20 21:49:35 +00:00
ao Fixing equalize with three things and improving functionality (#124632) 2024-06-20 16:55:56 +00:00
autograd [Profiler] Clean up use_mtia to follow standard use_device instead (#126284) 2024-06-18 21:01:03 +00:00
backends Enable deterministic support for oneDNN (#127277) 2024-06-21 05:21:24 +00:00
compiler
contrib
cpu [inductor][cpp] BF16 AMX micro-gemm support (#127195) 2024-06-21 07:21:47 +00:00
csrc [Brian's PR #128754] Use torch.ops.fsdp.set_ for FSDP2 storage resize; don't functionalize resize_, set_, split_with_sizes_copy.out (#129203) 2024-06-23 06:07:19 +00:00
cuda Document the torch.cuda.profiler.profile function (#128216) 2024-06-17 23:42:40 +00:00
distributed [FSDP2] Fixed unshard without lazy init (#129241) 2024-06-24 13:31:54 +00:00
distributions [BE]: Update mypy to 1.10.0 (#127717) 2024-06-13 15:57:13 +00:00
export [export] copy sym ops when respecting call module signature (#129153) 2024-06-21 01:40:22 +00:00
fft
func
futures
fx [Brian's PR #128754] Use torch.ops.fsdp.set_ for FSDP2 storage resize; don't functionalize resize_, set_, split_with_sizes_copy.out (#129203) 2024-06-23 06:07:19 +00:00
jit Fix export log script (#128967) 2024-06-20 17:01:00 +00:00
legacy
lib
linalg
masked [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
monitor
mps Add support in Python API for the recommended max working set size. (#128289) 2024-06-12 16:03:57 +00:00
mtia [MTIA] Fix synchronize API (#128714) 2024-06-17 21:58:46 +00:00
multiprocessing expose set_thread_name to Python and set thread names (#128448) 2024-06-13 16:38:23 +00:00
nested Backward support for unbind() with NJT (#128032) 2024-06-21 14:05:23 +00:00
nn [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
onnx Remove Caffe2 handling from onnx_unpack_quantized_weights (#129021) 2024-06-21 06:16:44 +00:00
optim Optim package docstring fix (#129086) 2024-06-21 14:30:53 +00:00
package
profiler [Profiler] Clean up use_mtia to follow standard use_device instead (#126284) 2024-06-18 21:01:03 +00:00
quantization
signal
sparse
special
testing [MPS] Fused Adam & AdamW (#127242) 2024-06-18 19:59:50 +00:00
utils Allow SAC policy_fn to return bool for backward compatibility (#129262) 2024-06-24 13:54:30 +00:00
xpu
__config__.py
__future__.py
__init__.py [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
_appdirs.py
_classes.py
_compile.py
_custom_ops.py
_deploy.py
_guards.py Evaluate symexprs on load path of cache not write (#128997) 2024-06-20 08:55:12 +00:00
_jit_internal.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_linalg_utils.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_lobpcg.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
_lowrank.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_meta_registrations.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_namedtensor_internals.py
_ops.py Torchbind call method + effects support (#128397) 2024-06-14 21:28:17 +00:00
_python_dispatcher.py
_size_docs.py
_sources.py
_storage_docs.py
_streambase.py
_tensor.py [BE] explicitly export subpackage torch.utils (#128342) 2024-06-13 04:39:16 +00:00
_tensor_docs.py
_tensor_str.py
_torch_docs.py Update torch.nanmean() docstring to mention input dtype requirement (#128155) 2024-06-12 17:46:36 +00:00
_utils.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_utils_internal.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_VF.py
_vmap_internals.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt [Split Build] Fix libtorch_python RPATH (#129088) 2024-06-21 06:49:19 +00:00
custom_class.h
custom_class_detail.h Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)" 2024-06-15 01:58:20 +00:00
extension.h
functional.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
hub.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
library.h Revert "[1/N] Change #include <c10/util/Optional.h> to #include <optional> (#128301)" 2024-06-15 01:58:20 +00:00
library.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
overrides.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
py.typed
quasirandom.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
random.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
README.txt
return_types.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
script.h
serialization.py [BE] enable UFMT for torch/nn/*.py (#128593) 2024-06-23 16:05:13 +00:00
storage.py Fix Storage.filename to not track the filename when storage was mmap-ed with MAP_PRIVATE (#128725) 2024-06-17 18:55:47 +00:00
torch_version.py [BE] enable UFMT for top-level files torch/*.py (#127707) 2024-06-12 20:15:05 +00:00
types.py [BE] update type annotations for basic utilities in torch/__init__.py (#129001) 2024-06-24 18:04:38 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C API, but they are really *internal implementation detail*
headers, whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.