pytorch/test/distributed (latest commit 2022-06-14 17:44:51 +00:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _shard/ | [PT-D] Use process group of the partial tensor so sub pg comm will be enabled during reshard | 2022-06-14 17:44:51 +00:00 |
| algorithms/ | | |
| bin/ | | |
| elastic/ | [ci] remove IN_CI env var | 2022-06-11 17:16:30 +00:00 |
| fsdp/ | Forward attributes to wrapped module | 2022-06-14 01:13:33 +00:00 |
| launcher/ | | |
| nn/jit/ | | |
| optim/ | [CUDA graphs] Allows Adam and AdamW to be capture-safe (#77862) | 2022-06-13 01:56:47 +00:00 |
| pipeline/sync/ | Add all bzl files per D36874458 | 2022-06-06 09:40:19 -07:00 |
| rpc/ | [ci] remove IN_CI env var | 2022-06-11 17:16:30 +00:00 |
| argparse_util_test.py | | |
| defs.bzl | Add all bzl files per D36874458 | 2022-06-06 09:40:19 -07:00 |
| test_c10d_common.py | | |
| test_c10d_gloo.py | | |
| test_c10d_nccl.py | [DDP] Fix broadcast for channels-last tensors (#79060) | 2022-06-08 21:52:58 +00:00 |
| test_c10d_object_collectives.py | [distributed] Handle object collectives and NCCL. (#79034) | 2022-06-13 19:23:39 +00:00 |
| test_c10d_spawn.py | | |
| test_c10d_spawn_gloo.py | | |
| test_c10d_spawn_nccl.py | Use _all_gather_base and fuse matmul for sharded linear. | 2022-06-01 17:17:34 +00:00 |
| test_data_parallel.py | | |
| test_distributed_spawn.py | | |
| test_launcher.py | | |
| test_nccl.py | | |
| test_pg_wrapper.py | | |
| test_store.py | [Bootcamp] Set default value of TCPStore world_size to None in pybind definition (#77277) | 2022-05-12 18:48:48 +00:00 |