pytorch/torch/distributed
Mikayla Gawarecki f3f305ef3e Fix condition for weights_only unpickler for DTensor (#140740)
Same as #140739, but for DTensor: move the safe globals for DTensor into `torch.distributed.tensor.__init__`, and update the error message so users know that `torch.distributed.tensor` must be imported to load a DTensor with the `weights_only` unpickler.

Differential Revision: [D65961690](https://our.internmc.facebook.com/intern/diff/D65961690)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140740
Approved by: https://github.com/malfet
ghstack dependencies: #140739
2024-11-19 02:44:53 +00:00
_composable [FSDP2] privateuse1 support fsdp2. (#139539) 2024-11-15 06:34:35 +00:00
_shard [C10D] Support group_dst in scatter/gather (+object) ops (#140827) 2024-11-17 22:19:58 +00:00
_sharded_tensor
_sharding_spec
_symmetric_memory [SymmetricMemory] introduce user-facing APIs empty() and rendezvous() (#139677) 2024-11-17 20:51:50 +00:00
_tensor
_tools ILP for auto FSDP wrapping (#140298) 2024-11-11 22:02:39 +00:00
algorithms [C10D] support group_src/dst in broadcast/reduce ops (#140843) 2024-11-19 01:23:08 +00:00
autograd
benchmarks
checkpoint Revert "Fix the use of fsspec transactions (#135541)" 2024-11-07 17:03:37 +00:00
elastic [Torch Elastic] Fix the bug caused by wrong host address in creating TCPStore server inside dynamic rendezvous (#139702) 2024-11-05 15:28:03 +00:00
examples
fsdp Replace clone.detach with detach.clone (#140264) 2024-11-13 07:01:02 +00:00
launcher
nn
optim Fix unintended deprecation warning in torch.distributed.optim (#140889) 2024-11-18 02:34:51 +00:00
pipelining [pipelining] clean up stage functions (#140418) 2024-11-12 21:42:08 +00:00
rpc
tensor Fix condition for weights_only unpickler for DTensor (#140740) 2024-11-19 02:44:53 +00:00
__init__.py
_checkpointable.py
_composable_state.py [FSDP2] Make module-to-state mapping use weakrefs (#139650) 2024-11-05 02:16:52 +00:00
_functional_collectives.py [aotd] coerce_same_metadata_as_tangent with expected_type for e.g.AsyncCollectiveTensor (#139095) 2024-11-07 16:24:48 +00:00
_functional_collectives_impl.py
_state_dict_utils.py
argparse_util.py
c10d_logger.py [c10d][Logging] Remove args and kwargs from c10d logging (#140169) 2024-11-09 13:57:32 +00:00
collective_utils.py
constants.py
CONTRIBUTING.md
device_mesh.py [DeviceMesh] fix sub mesh size calculation in create_sub_mesh() (#138945) 2024-10-29 17:56:56 +00:00
distributed_c10d.py [C10D] Support group_dst/group_src in c10d send/recv object_list (#140847) 2024-11-19 01:23:08 +00:00
launch.py
logging_handlers.py
remote_device.py
rendezvous.py
run.py [BE]: Use proper logger in torch.distributed.run (#140547) 2024-11-14 14:49:17 +00:00
utils.py