pytorch/test/distributed
Andrew Gu d087b32149 [BE][FSDP] Retire _get_full_detached_param() (#80871)
The tests did not actually require that the parameters be detached, so this coalesces `_get_full_detached_param()` with `get_full_params()`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80871
Approved by: https://github.com/rohan-varma
2022-07-08 22:28:16 +00:00
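The commit's rationale can be illustrated with a minimal sketch (not the actual FSDP test helpers; `param`, `detached`, and `live` are hypothetical names): when a test only compares parameter values, a detached copy and the live parameter are interchangeable, so a separate detached-param helper is redundant.

```python
import torch

# Hypothetical illustration of why a detached-copy helper adds nothing
# for value-only comparisons in tests.
param = torch.nn.Parameter(torch.ones(3))

# What a _get_full_detached_param()-style helper would hand back:
# a copy severed from the autograd graph.
detached = param.detach().clone()

# What a get_full_params()-style helper can expose directly:
# the live tensor data, still attached to autograd via `param`.
live = param.data

# For equality assertions, both carry the same values, so the
# detaching step is unnecessary for these tests.
assert torch.equal(detached, live)
assert not detached.requires_grad
assert param.requires_grad
```

The only observable difference is autograd tracking, which value-equality assertions never exercise.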
Name | Last commit | Date
_shard | [shard] make state_dict hook be consistent | 2022-06-17 22:08:06 +00:00
algorithms | Enable test: distributed/algorithms/quantization/test_quantization (#80097) | 2022-07-01 01:32:33 +00:00
bin | |
elastic | [ci] remove IN_CI env var | 2022-06-11 17:16:30 +00:00
fsdp | [BE][FSDP] Retire _get_full_detached_param() (#80871) | 2022-07-08 22:28:16 +00:00
launcher | |
nn/jit | |
optim | [CUDA graphs] Allows Adam and AdamW to be capture-safe (#77862) | 2022-06-13 01:56:47 +00:00
pipeline/sync | Add all bzl files per D36874458 | 2022-06-06 09:40:19 -07:00
rpc | [ci] remove IN_CI env var | 2022-06-11 17:16:30 +00:00
argparse_util_test.py | |
defs.bzl | Add all bzl files per D36874458 | 2022-06-06 09:40:19 -07:00
test_c10d_common.py | |
test_c10d_gloo.py | |
test_c10d_nccl.py | [DDP] Fix broadcast for channels-last tensors (#79060) | 2022-06-08 21:52:58 +00:00
test_c10d_object_collectives.py | Revert "Revert "[distributed] Handle object collectives and NCCL. (#79034)"" | 2022-06-15 10:04:37 -07:00
test_c10d_pypg.py | [distributed] Make DDP work with python process group (#79176) | 2022-06-28 17:14:21 +00:00
test_c10d_spawn.py | |
test_c10d_spawn_gloo.py | |
test_c10d_spawn_nccl.py | Ensure tensors are contiguous in functional all_gather. | 2022-06-17 01:27:11 +00:00
test_data_parallel.py | |
test_distributed_spawn.py | |
test_launcher.py | |
test_nccl.py | |
test_pg_wrapper.py | Add _reduce_scatter_base to ProcessGroupWrapper. (#79633) | 2022-06-29 15:32:42 +00:00
test_store.py | |