pytorch/test/distributed
Latest commit: 01c1560d10 by Wanchao Liang, 2022-05-02 22:07:42 +00:00

Back out "[shard] ShardedTensor Interface"

Summary:
Original commit changeset: 04ad48ae373e

Original Phabricator Diff: D35123200

(Note: this ignores all push blocking failures!)

Test Plan: wait for ci

Differential Revision: D36069386

fbshipit-source-id: dc3c2fa035a0fb942aab14d36937ec46f8391084
(cherry picked from commit ca21654c3457d84d64a9bb02a1a0f88b8a819811)
Name                       Date                        Last commit
_shard                     2022-05-02 22:07:42 +00:00  Back out "[shard] ShardedTensor Interface"
algorithms                 2021-10-22 17:16:11 -07:00  [BE] move init_multigpu_helper to common_distributed (#67050)
bin                        2021-10-19 10:55:20 -07:00  Add test owner to distributed files starting with test_ (#66797)
elastic                    2022-04-15 20:29:05 +00:00  [torch][elastic] Make final agent barrier to shutdown properly
fsdp                       2022-05-02 13:24:53 +00:00  [FSDP] Relax exec order valid. to only fwd
launcher                   2021-11-19 15:23:30 -08:00  [torchelastic][1/n] Fix caffe2.test.distributed.launcher.api_test flaky tests (#68624)
nn/jit                     2021-10-19 16:54:05 -07:00  Have test classes extend from common_utils.TestCase, not unittest.TestCase (#66900)
optim                      2022-04-18 03:27:23 +00:00  Convert DDP parameters to ReplicatedTensor during forward pass.
pipeline/sync              2021-11-01 12:26:03 -07:00  [skip ci] set more tests with owners for distributed and elastic (#67583)
rpc                        2021-10-19 10:55:20 -07:00  Add test owner to distributed files starting with test_ (#66797)
argparse_util_test.py      2021-11-01 12:26:03 -07:00  [skip ci] set more tests with owners for distributed and elastic (#67583)
test_c10d_common.py        2022-04-01 23:48:30 +00:00  Fix SyncBatchNorm for empty inputs (#74944)
test_c10d_gloo.py          2022-04-25 14:28:56 +00:00  ROCm: unskip c10 gloo tests
test_c10d_nccl.py          2022-04-21 03:25:09 +00:00  Use batched operations for PowerSGD
test_c10d_spawn.py         2021-12-06 13:38:58 -08:00  [PyTorch][Distributed] Enable Reduce Scatter and modify all_to_all for sharded linear with more test cases. (#68786)
test_c10d_spawn_gloo.py    2021-12-06 13:38:58 -08:00  [PyTorch][Distributed] Enable Reduce Scatter and modify all_to_all for sharded linear with more test cases. (#68786)
test_c10d_spawn_nccl.py    2021-12-06 13:38:58 -08:00  [PyTorch][Distributed] Enable Reduce Scatter and modify all_to_all for sharded linear with more test cases. (#68786)
test_data_parallel.py      2022-02-17 02:33:08 +00:00  no longer coalesce sparse COO tensors before comparison (#69751)
test_distributed_spawn.py  2021-10-19 10:55:20 -07:00  Add test owner to distributed files starting with test_ (#66797)
test_launcher.py           2021-10-19 10:55:20 -07:00  Add test owner to distributed files starting with test_ (#66797)
test_nccl.py               2021-11-09 13:46:13 -08:00  [NCCL] Patch bfloat16 support (#67843)
test_pg_wrapper.py         2021-10-19 10:55:20 -07:00  Add test owner to distributed files starting with test_ (#66797)
test_store.py              2022-03-24 19:51:09 +00:00  c10d: retry dns lookup failures (#74641)