pytorch/test/distributed/_composable
Anant Gulati 9091096d6c Refactoring Distributed test cases to be device agnostic [1/n] (#145222)
In this series of PRs we intend to refactor distributed test cases so that they are completely device agnostic.

These changes include the following approaches:

- Allowing for multiple device types using instantiate_device_type_test
- Replacing direct calls to CUDA streams with torch.get_device_module(device) wherever applicable
- Skipping the setup steps required by MultiProcessTestCase by using DistributedTestBase (#138216) wherever applicable
- Replacing explicit references to a distributed backend (NCCL, HCCL, etc.) with get_default_backend_for_device (#140536)

This should significantly improve usability across all supported device types.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145222
Approved by: https://github.com/kwen2501
2025-02-05 18:47:09 +00:00
fsdp pickler for GraphModule (#141659) 2025-01-31 05:34:28 +00:00
test_composability PEP585 update - test (#145176) 2025-01-22 04:48:28 +00:00
test_checkpoint.py PEP585 update - test (#145176) 2025-01-22 04:48:28 +00:00
test_contract.py PEP585 update - test (#145176) 2025-01-22 04:48:28 +00:00
test_replicate.py
test_replicate_with_compiler.py Refactoring Distributed test cases to be device agnostic [1/n] (#145222) 2025-02-05 18:47:09 +00:00