Run distributed tensor tests:

From the repo root, run (works on either CPU or GPU):

```shell
pytest test/spmd/tensor/test_tensor.py
pytest test/spmd/tensor/test_ddp.py
```

To run a specific test case and print its stdout/stderr:

```shell
pytest test/spmd/tensor/test_tensor.py -s -k test_tensor_from_local
```
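The `-k` flag selects tests by a name-matching expression. As a rough illustration only (plain Python, not pytest's actual implementation, which also supports `and`/`or`/`not` expressions), passing `-k test_tensor_from_local` keeps just the collected tests whose names contain that substring:

```python
# Toy sketch of pytest's -k substring filtering. This is NOT pytest
# internals -- just a hypothetical helper showing the matching idea.
def select_tests(test_names, keyword):
    """Return the tests whose name contains the -k keyword."""
    return [name for name in test_names if keyword in name]

# Hypothetical names in the style of the DTensor test files above.
collected = [
    "test_tensor_from_local",
    "test_tensor_to_local",
    "test_dtensor_spec_hash",
]

selected = select_tests(collected, "test_tensor_from_local")
print(selected)  # only the matching test would be run
```

With `-s` added, pytest additionally disables output capturing, so anything the selected test prints appears directly in the terminal.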