mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-14 20:57:59 +00:00
Fixes #142058

## Summary

DTensor's `convolution_backward` op throws an exception when the input tensor has `requires_grad=False`, which happens when the conv layer is the first layer in the model. The ATen `convolution_backward` op usually returns 3 tensors (`grad_input`, `grad_weight`, `grad_bias`), but `grad_input` is actually an `Optional[Tensor]` that can be `None` in the case mentioned above. However, the DTensor sharding propagation rule and the corresponding TP conv backward implementation both assume that `grad_input` exists.

## Fix

Allow `grad_input` to be `None` for the `convolution_backward` op.

## Test

`pytest test/distributed/tensor/test_convolution_ops.py`

## Follow-up

The current implementation of the DTensor conv op also ignores `output_mask`; this may need further care.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/142278
Approved by: https://github.com/bdhirsh
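The scenario described above can be reproduced with plain (non-distributed) PyTorch: when a conv layer is the first layer, its input is a leaf tensor with `requires_grad=False`, so autograd produces no gradient for it while still producing weight and bias gradients. A minimal sketch (the tensor shapes and layer sizes here are arbitrary illustrations, not taken from the PR):

```python
import torch

# Model input: a leaf tensor that does NOT require grad,
# as is typical when the conv layer is the first layer.
x = torch.randn(1, 3, 8, 8)
conv = torch.nn.Conv2d(3, 4, kernel_size=3)

out = conv(x)
out.sum().backward()

# Weight and bias gradients are computed...
assert conv.weight.grad is not None
assert conv.bias.grad is not None
# ...but no gradient flows back to the input (grad_input is None
# in the underlying convolution_backward call for this case).
assert x.grad is None
```

Under DTensor, the sharding propagation rule previously assumed all three outputs of `convolution_backward` were present, which is what raised the exception this PR fixes.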
| Name |
|---|
| debug |
| experimental |
| parallel |
| __init__.py |
| README.md |
| test_api.py |
| test_attention.py |
| test_common_rules.py |
| test_convolution_ops.py |
| test_dtensor.py |
| test_dtensor_compile.py |
| test_dtensor_ops.py |
| test_embedding_ops.py |
| test_experimental_ops.py |
| test_init.py |
| test_math_ops.py |
| test_matrix_ops.py |
| test_op_strategy.py |
| test_optimizers.py |
| test_pointwise_ops.py |
| test_random_ops.py |
| test_redistribute.py |
| test_tensor_ops.py |
| test_utils.py |
| test_view_ops.py |
| test_xla_integration.py |
Run the distributed tensor tests from the repo root (on either CPU or GPU):

```
pytest test/distributed/tensor/test_dtensor.py
```

To run specific test cases and print stdout/stderr:

```
pytest test/distributed/tensor/test_dtensor.py -s -k test_from_local
```