pytorch/torch/distributed
Kasperi Apell a7915c56f6 Propagate callable parameter types using ParamSpec (#142306) (#143797)
The codebase has a few locations where callable parameter type information is lost because the unpacked *args and **kwargs are typed as Any. Refactor these instances to retain the type information using typing_extensions.ParamSpec.

Also, in these functions, enforce the return type with a TypeVar.

Addresses #142306
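
The pattern described above can be sketched roughly as follows. This is a minimal illustration of the ParamSpec/TypeVar technique, not code from the PR itself; the `run` wrapper is a hypothetical example:

```python
from typing import Any, Callable, TypeVar

from typing_extensions import ParamSpec

_P = ParamSpec("_P")
_T = TypeVar("_T")


# Before: typing *args/**kwargs as Any erases the callable's signature,
# and the Any return type erases the result type.
def run_untyped(func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    return func(*args, **kwargs)


# After: ParamSpec propagates the callable's parameter types through the
# wrapper, and the TypeVar enforces that the return type matches func's.
def run(func: Callable[_P, _T], *args: _P.args, **kwargs: _P.kwargs) -> _T:
    return func(*args, **kwargs)
```

With the typed version, a checker such as mypy knows that `run(int, "42")` returns `int`, and it rejects calls whose arguments do not match `func`'s signature.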

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143797
Approved by: https://github.com/Skylion007

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
Co-authored-by: Xuehai Pan <XuehaiPan@outlook.com>
2024-12-29 23:03:14 +00:00
_composable
_shard Add support for other backends in get_preferred_device (#132118) 2024-12-16 18:30:41 +00:00
_sharded_tensor
_sharding_spec
_symmetric_memory [fused_all_gather_matmul] use _multimem_all_gather_matmul for small global Ms (#143160) 2024-12-17 01:07:27 +00:00
_tensor
_tools Propagate callable parameter types using ParamSpec (#142306) (#143797) 2024-12-29 23:03:14 +00:00
algorithms
autograd
benchmarks [BE][CI] bump ruff to 0.8.4 (#143753) 2024-12-24 12:24:10 +00:00
checkpoint [BE][CI] bump ruff to 0.8.4 (#143753) 2024-12-24 12:24:10 +00:00
elastic remove allow-untyped-defs from distributed/elastic/multiprocessing/subprocess_handler/handlers.py (#143917) 2024-12-28 00:13:05 +00:00
examples
fsdp Enable FSDP2 on XPU device (#143737) 2024-12-26 18:34:11 +00:00
launcher [BE] replace incorrect .. note:: invocations (#142868) 2024-12-11 19:58:18 +00:00
nn
optim [BE] replace incorrect .. note:: invocations (#142868) 2024-12-11 19:58:18 +00:00
pipelining remove allow-untyped-defs from distributed/pipelining/_unflatten.py (#143915) 2024-12-27 22:21:28 +00:00
rpc remove allow-untyped-defs for distributed/rpc/_testing/__init__.py (#143271) 2024-12-16 02:35:37 +00:00
tensor [DTensor] Add aten.amin/amax to linear_reduction_strategy (#143747) 2024-12-24 13:36:40 +00:00
__init__.py
_checkpointable.py
_composable_state.py
_functional_collectives.py
_functional_collectives_impl.py
_state_dict_utils.py [state dict] Change _load_model_state_dict to enable cpu_offload, accept 2 device type and optimize memory (#142845) 2024-12-19 05:06:41 +00:00
argparse_util.py
c10d_logger.py
collective_utils.py
constants.py
CONTRIBUTING.md
device_mesh.py Use new group instead of split group on non-CUDA device (#141469) 2024-12-13 05:11:33 +00:00
distributed_c10d.py
launch.py
logging_handlers.py
remote_device.py
rendezvous.py
run.py
utils.py