pytorch/test/distributed
Shen Li 2a81e8b8f1 Let all_reduce_coalesced and all_gather_coalesced return Future objects (#64722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64722

`all_reduce_coalesced` and `all_gather_coalesced` have never been publicly
released in our API docs, so I would assume the blast radius of this change to be small.

The motivation for this change is to allow implementing
`all_reduce_coalesced` and `all_gather_coalesced` by re-using the `allreduce`
and `allgather` C++ cores and performing the flatten and copy steps only on the
Python side. With that, we can then remove `all_reduce_coalesced` and
`all_gather_coalesced` from the C++ ProcessGroup APIs. For async mode,
the copy-back logic that runs after the communication needs to be chained
as a callback on the returned Future, and the chained child Future used
as the return value (otherwise, we would need to wrap the child Future
in another work handle). This PR tests whether we can directly
return a Future without breaking tests or internal use cases. If so,
it will make the consolidation a lot easier.
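For illustration, the flatten-then-chain-copy-back pattern described above could look roughly like the sketch below. This is not the PR's implementation; it assumes a backend whose Work object exposes `get_future()` and uses torch's internal `_flatten_dense_tensors` / `_unflatten_dense_tensors` helpers.

```python
# Minimal sketch (assumptions noted above, not the actual PR code) of building
# all_reduce_coalesced on top of the existing allreduce core, with flattening
# and copy-back handled on the Python side.
import torch.distributed as dist
from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors


def all_reduce_coalesced_sketch(tensors, group=None):
    # Flatten the tensor list into one contiguous buffer.
    flat = _flatten_dense_tensors(tensors)

    # Reuse the existing allreduce core on the flat buffer, asynchronously.
    work = dist.all_reduce(flat, group=group, async_op=True)

    def copy_back(_fut):
        # After communication finishes, copy the reduced flat buffer back
        # into the original tensors.
        for dst, src in zip(tensors, _unflatten_dense_tensors(flat, tensors)):
            dst.copy_(src)
        return tensors

    # Chain the copy-back as a callback on the communication Future and
    # return the child Future, so callers wait on the fully finished result.
    return work.get_future().then(copy_back)
```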

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse agolynski SciPioneer H-Huang mrzzd cbalioglu gcramer23

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D30830994

Pulled By: mrshenli

fbshipit-source-id: dcde0ed9245e9e8fee357b3588b07d540a4b6318
2021-09-10 07:45:25 -07:00
_sharded_tensor Fix bug in ShardedTensorMetadata serde. (#63902) 2021-08-31 20:31:14 -07:00
_sharding_spec Add driver function to run test_sharded_tensor.py and test_sharding_spec.py (#63189) 2021-08-16 15:25:32 -07:00
algorithms [DDP Comm Hook] Create a noop hook for performance debugging (#64344) 2021-09-01 17:36:22 -07:00
bin
elastic [8/N] Remove c10d/ddp fork tests. (#63454) 2021-08-20 12:23:18 -07:00
launcher (torch.distributed) Add torch.distributed.is_torchelastic_launched() util method + make init_method=tcp:// compatible with torchelastic (#63910) 2021-08-25 22:57:43 -07:00
nn/jit
optim Remove req to call step() in training loop (#63164) 2021-08-13 08:22:44 -07:00
pipeline/sync
rpc [7/N] Remove fork tests for RPC. (#63443) 2021-08-19 11:22:40 -07:00
argparse_util_test.py
test_c10d_common.py [8/N] Remove c10d/ddp fork tests. (#63454) 2021-08-20 12:23:18 -07:00
test_c10d_gloo.py Let all_reduce_coalesced and all_gather_coalesced return Future objects (#64722) 2021-09-10 07:45:25 -07:00
test_c10d_nccl.py [c10d] Provide failure reason from ProcessGroup when aborting NCCL comm (#64241) 2021-09-08 09:19:24 -07:00
test_c10d_spawn.py
test_c10d_spawn_gloo.py [8/N] Remove c10d/ddp fork tests. (#63454) 2021-08-20 12:23:18 -07:00
test_c10d_spawn_nccl.py
test_data_parallel.py
test_distributed_spawn.py [4/N] Enable opt-asan for distributed unit tests. (#62051) 2021-08-10 22:38:31 -07:00
test_jit_c10d.py [8/N] Remove c10d/ddp fork tests. (#63454) 2021-08-20 12:23:18 -07:00
test_launcher.py (torch.distributed) Add torch.distributed.is_torchelastic_launched() util method + make init_method=tcp:// compatible with torchelastic (#63910) 2021-08-25 22:57:43 -07:00
test_nccl.py
test_pg_wrapper.py [8/N] Remove c10d/ddp fork tests. (#63454) 2021-08-20 12:23:18 -07:00
test_store.py