pytorch/test/distributed
KingsleyLiu-NV cd2798943d [dtensor] support convolution ops (#113123)
This PR adds a prototype for training convolutional neural networks on top of DTensor.

- Register the required ops and implement operator dispatch (a rough sketch of the registration pattern follows this list)
- Add unit tests and example
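
To illustrate the registration step, here is a minimal sketch written against DTensor's internal propagation-rule API of that era (`register_prop_rule`, `OpSchema`, `OutputSharding`); the module paths and this trivially simple rule are assumptions for illustration, and the actual conv rules added by this PR are considerably more involved:
```python
# Minimal sketch only: a trivial propagation rule that keeps the output
# sharded like the input activations. The real conv rules in this PR also
# handle weight/bias placements and the convolution backward op.
import torch
from torch.distributed._tensor.op_schema import OpSchema, OutputSharding
from torch.distributed._tensor.ops.utils import register_prop_rule

aten = torch.ops.aten

@register_prop_rule(aten.convolution.default)
def convolution_rule(op_schema: OpSchema) -> OutputSharding:
    # args_schema[0] is the spec of the input activations; with replicated
    # weights, the output simply follows the input's placement.
    input_spec = op_schema.args_schema[0]
    return OutputSharding(output_spec=input_spec)
```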

In this prototype, we shard the activations and replicate the model weights. This approach scales out to multiple GPUs while reducing the per-GPU memory footprint, and it achieves weak scaling in training performance, i.e., time per iteration stays roughly constant as the workload grows with the number of GPUs.
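
To make the placement scheme concrete, here is a minimal sketch using the public DTensor API (`DeviceMesh`, `distribute_tensor`, `Shard`, `Replicate`). The shapes and the batch-dim shard placement are illustrative assumptions, not necessarily what the PR's example uses:
```python
import os
import torch
import torch.nn.functional as F
from torch.distributed._tensor import (
    DeviceMesh,
    Replicate,
    Shard,
    distribute_tensor,
)

# Assumes a launch like `torchrun --nproc-per-node=2 sketch.py`,
# one process per GPU.
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
mesh = DeviceMesh("cuda", list(range(int(os.environ["WORLD_SIZE"]))))

torch.manual_seed(0)  # every rank builds the same global tensors
x = torch.randn(8, 3, 32, 32, device="cuda")  # global input (illustrative)
w = torch.randn(16, 3, 3, 3, device="cuda")   # conv weight (illustrative)

# Shard the activations (batch dim here, purely for illustration) and
# replicate the weight on every rank, matching the scheme described above.
x_dt = distribute_tensor(x, mesh, [Shard(0)])
w_dt = distribute_tensor(w, mesh, [Replicate()])

# With the conv ops registered by this PR, F.conv2d dispatches on DTensor
# inputs and returns a DTensor.
y_dt = F.conv2d(x_dt, w_dt, padding=1)
print(y_dt.shape)  # torch.Size([8, 16, 32, 32]) -- the global shape
```
Under these placements, each rank materializes only its own activation shard plus a full copy of the weights, which is where the per-GPU memory savings described above come from.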

Reference log (on 2x A100 GPUs):

Unit Tests
```bash
root@luna-prod-78-80gb:/pytorch# python3 test/distributed/_tensor/test_convolution_ops.py
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:456: UserWarning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (Triggered internally at /opt/conda/conda-bld/pytorch_1699257304556/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2170.)
  return F.conv2d(input, weight, bias, self.stride,
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:456: UserWarning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (Triggered internally at /opt/conda/conda-bld/pytorch_1699257304556/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:2170.)
  return F.conv2d(input, weight, bias, self.stride,
..
----------------------------------------------------------------------
Ran 2 tests in 30.354s

OK
root@luna-prod-78-80gb:/pytorch# python3 test/distributed/_tensor/test_other_ops.py
[rank0]:[W ProcessGroupNCCL.cpp:2170] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank0]:[W ProcessGroupNCCL.cpp:2170] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank1]:[W ProcessGroupNCCL.cpp:2170] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank1]:[W ProcessGroupNCCL.cpp:2170] Warning: 0TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
...
----------------------------------------------------------------------
Ran 3 tests in 16.343s

OK
```
ConvNeXt Example
```bash
root@luna-prod-78-80gb:/pytorch# python3 torch/distributed/_tensor/examples/convnext_example.py
rank 3, 20 iterations, latency     584.80 ms, forward     102.84 ms, backward     297.80 ms, max reserved    16.34 GiB, max allocated    14.75 GiB
rank 1, 20 iterations, latency     584.64 ms, forward     104.85 ms, backward     297.60 ms, max reserved    16.40 GiB, max allocated    14.74 GiB
rank 0, 20 iterations, latency     584.48 ms, forward     104.64 ms, backward     297.90 ms, max reserved    16.39 GiB, max allocated    14.75 GiB
rank 2, 20 iterations, latency     584.96 ms, forward      93.21 ms, backward     297.95 ms, max reserved    16.40 GiB, max allocated    14.74 GiB
```

@wanchaol @fduwjj FYI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113123
Approved by: https://github.com/wanchaol
2023-11-20 21:01:28 +00:00
| Name | Last commit | Date |
|---|---|---|
| _composable | [PT-D] Made _get_registry return None if no APIs applied (#113654) | 2023-11-14 20:28:11 +00:00 |
| _shard | Fix typo under test directory (#112346) | 2023-11-03 07:53:33 +00:00 |
| _spmd | Fix typo under test directory (#112346) | 2023-11-03 07:53:33 +00:00 |
| _tensor | [dtensor] support convolution ops (#113123) | 2023-11-20 21:01:28 +00:00 |
| _tools | | |
| algorithms | | |
| bin | | |
| checkpoint | Improves comparison of state dicts for Checkpoint E2E Tests (#113181) | 2023-11-15 20:48:45 +00:00 |
| elastic | [TorchElastic] Add option to configure log prefix for each rank (#112357) | 2023-11-08 01:00:26 +00:00 |
| fsdp | fixes multiple GPU detected error for test_fsdp_fine_tune.py (#112406) | 2023-11-18 02:07:18 +00:00 |
| launcher | | |
| nn/jit | | |
| optim | | |
| pipeline/sync | Fix typo under test directory (#112346) | 2023-11-03 07:53:33 +00:00 |
| rpc | | |
| tensor/parallel | [2D] Remove enable_2d_with_fsdp() API and make remove_enable_2d_with_fsdp private (#112473) | 2023-11-16 01:14:00 +00:00 |
| argparse_util_test.py | | |
| test_c10d_common.py | Fix typo under test directory (#112346) | 2023-11-03 07:53:33 +00:00 |
| test_c10d_functional_native.py | [BE] Don't mutate torch.compile global config in tests (#113882) | 2023-11-17 16:49:48 +00:00 |
| test_c10d_gloo.py | fix gloo cuda sparse_allreduce dispatch (#111485) | 2023-10-19 21:15:45 +00:00 |
| test_c10d_logger.py | [c10d] add nccl version to c10d logger (#111215) | 2023-10-16 18:47:49 +00:00 |
| test_c10d_nccl.py | [Reland] Fix default timeouts for python entrypoints (e.g. init_process_group) (#113094) | 2023-11-07 05:34:26 +00:00 |
| test_c10d_object_collectives.py | | |
| test_c10d_pypg.py | | |
| test_c10d_spawn.py | | |
| test_c10d_spawn_gloo.py | | |
| test_c10d_spawn_nccl.py | | |
| test_c10d_spawn_ucc.py | | |
| test_c10d_ucc.py | | |
| test_collective_utils.py | | |
| test_compute_comm_reordering.py | Fix unit tests and add logging for Inductor intra-graph reordering (#111981) | 2023-10-25 18:19:43 +00:00 |
| test_data_parallel.py | [reland2] Update custom Function preserve torch function when inputs … (#110895) | 2023-10-11 21:37:19 +00:00 |
| test_device_mesh.py | [DeviceMesh] Remove _validate_mesh from device_mesh.py (#112928) | 2023-11-04 05:12:27 +00:00 |
| test_distributed_spawn.py | Make test_distributed_spawn.py tell you how to run it correctly (#112924) | 2023-11-04 02:43:43 +00:00 |
| test_dynamo_distributed.py | [BE] Don't mutate torch.compile global config in tests (#113882) | 2023-11-17 16:49:48 +00:00 |
| test_fake_pg.py | [2D] Remove enable_2d_with_fsdp() API and make remove_enable_2d_with_fsdp private (#112473) | 2023-11-16 01:14:00 +00:00 |
| test_functional_api.py | Make FakeProcessGroup traceable (#113314) | 2023-11-10 16:03:38 +00:00 |
| test_inductor_collectives.py | [BE] Don't mutate torch.compile global config in tests (#113882) | 2023-11-17 16:49:48 +00:00 |
| test_launcher.py | | |
| test_multi_threaded_pg.py | | |
| test_nccl.py | | |
| test_pg_wrapper.py | [Dist] Fix coalescing manager + DETAIL debug mode (#111878) | 2023-10-24 07:47:39 +00:00 |
| test_store.py | Add timeout for master store if clients do not join (#111805) | 2023-10-27 14:44:43 +00:00 |