pytorch/test/distributed
Name                             Last commit
_composable/                     Define the public API for torch.distributed.fsdp (#109922)  2023-09-28 02:15:58 +00:00
_shard/                          [Test][ShardedTensor] Add test for corner case for chunk sharding spec (#109626)  2023-09-20 14:40:07 +00:00
_spmd/                           Move has_triton to top level triton utils so that dynamo can also access (#109832)  2023-09-22 19:33:41 +00:00
_tensor/                         [dtensor] add grad placements kwarg to to_local API (#110629)  2023-10-05 21:34:01 +00:00
_tools/
algorithms/
bin/
checkpoint/                      [state_dict][1/N] Implement the basic functions of distributed.checkpoint._state_dict (#105902)  2023-10-05 20:04:15 +00:00
elastic/
fsdp/                            [FSDP][optim_state_dict] Make the new optimizer allgather fusion work with fine-tuning models (#110540)  2023-10-05 15:17:10 +00:00
launcher/
nn/jit/
optim/
pipeline/sync/
rpc/
tensor/parallel/                 [3/N][2D] Enable training with new 2D flow (#110034)  2023-09-26 09:14:15 +00:00
argparse_util_test.py
test_c10d_common.py              Add "cuda" to MPI backend capabilities (#109614)  2023-09-21 13:34:58 +00:00
test_c10d_gloo.py
test_c10d_logger.py
test_c10d_nccl.py
test_c10d_object_collectives.py
test_c10d_pypg.py
test_c10d_spawn.py
test_c10d_spawn_gloo.py
test_c10d_spawn_nccl.py
test_c10d_spawn_ucc.py
test_c10d_ucc.py
test_collective_utils.py
test_data_parallel.py            Revert "Update custom Function preserve torch function when inputs returned as-is (#109825)"  2023-10-05 23:49:41 +00:00
test_distributed_spawn.py
test_dynamo_distributed.py       Move has_triton to top level triton utils so that dynamo can also access (#109832)  2023-09-22 19:33:41 +00:00
test_fake_pg.py
test_functional_api.py           Add functional collective all_to_all_single and support it in Inductor (#110195)  2023-10-05 23:11:51 +00:00
test_inductor_collectives.py     Add functional collective all_to_all_single and support it in Inductor (#110195)  2023-10-05 23:11:51 +00:00
test_launcher.py
test_multi_threaded_pg.py
test_nccl.py
test_pg_wrapper.py
test_store.py                    [c10d] Add tests for using libuv through init_process_group. (#108661)  2023-09-20 16:02:20 +00:00