Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-15 21:00:47 +00:00)
Fixes #142058

## Summary

DTensor's `convolution_backward` op throws an exception when the input tensor has `requires_grad=False`, which happens when the conv layer is the first layer in the model. The ATen `convolution_backward` op usually returns three tensors (`grad_input`, `grad_weight`, `grad_bias`), but `grad_input` is actually an `Optional[Tensor]` that can be `None` in the case mentioned above. However, the DTensor sharding propagation rule and the corresponding TP conv backward implementation both assume that `grad_input` exists.

## Fix

Allow `grad_input` to be `None` for the `convolution_backward` op.

## Test

`pytest test/distributed/tensor/test_convolution_ops.py`

## Follow-up

The current DTensor conv op implementation also ignores `output_mask`; this may need further care.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/142278
Approved by: https://github.com/bdhirsh
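The failure mode can be illustrated without DTensor. Below is a minimal, stdlib-only sketch (the helper name and return shapes are hypothetical, not PyTorch API) of a backward op that, like ATen's `convolution_backward`, fills each gradient slot only when the corresponding `output_mask` entry is `True` — so `grad_input` is `None` when the input does not require grad, and any downstream rule that unconditionally dereferences it will fail.

```python
from typing import List, Optional, Tuple

# Hypothetical stand-in for a gradient tensor; a list of floats here.
Grad = List[float]


def conv_backward_sketch(
    output_mask: List[bool],
) -> Tuple[Optional[Grad], Optional[Grad], Optional[Grad]]:
    """Mimic ATen convolution_backward's return contract.

    Each of (grad_input, grad_weight, grad_bias) is computed only when
    the matching output_mask entry is True; otherwise it is None.
    """
    grad_input = [0.0] if output_mask[0] else None
    grad_weight = [0.0] if output_mask[1] else None
    grad_bias = [0.0] if output_mask[2] else None
    return grad_input, grad_weight, grad_bias


# First conv layer in a model: the input has requires_grad=False,
# so output_mask[0] is False and grad_input comes back as None.
grads = conv_backward_sketch([False, True, True])
assert grads[0] is None       # grad_input is absent; callers must handle this
assert grads[1] is not None   # grad_weight is still produced
```

A sharding-propagation rule written against this contract must treat the first slot as optional rather than assume a tensor is always present — which is the essence of the fix in this PR.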
Files in the affected ops directory:

- __init__.py
- _common_rules.py
- _conv_ops.py
- _einsum_strategy.py
- _embedding_ops.py
- _experimental_ops.py
- _math_ops.py
- _matrix_ops.py
- _pointwise_ops.py
- _random_ops.py
- _tensor_ops.py
- _view_ops.py
- utils.py