Replace None grad_inputs with zero tensors in some cases

In Python-implemented autograd functions, we sometimes return None as the grad_input if the output is marked "non-differentiable". This change replaces those None values with zero-filled Variables whenever the corresponding input has requires_grad=True.

C++-implemented autograd functions expect their inputs (grad_outputs) to be defined when they are executed, and they always return non-null grad_inputs when should_compute_output(i) is true. A subsequent Python-implemented function returning None could therefore cause segfaults.

See #3412, #3241
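To make the described substitution concrete, here is a minimal sketch (not the actual PyTorch implementation; `replace_none_grads` is a hypothetical helper name) of replacing None grad_inputs with zero tensors when the corresponding input requires grad:

```python
import torch

def replace_none_grads(grad_inputs, inputs):
    # Hypothetical helper illustrating the fix described above:
    # any None grad_input whose corresponding input has
    # requires_grad=True is replaced with a zero-filled tensor,
    # so C++-implemented functions downstream always receive a
    # defined grad_output instead of None.
    fixed = []
    for grad, inp in zip(grad_inputs, inputs):
        if grad is None and inp.requires_grad:
            grad = torch.zeros_like(inp)
        fixed.append(grad)
    return tuple(fixed)

x = torch.randn(3, requires_grad=True)
y = torch.randn(3)  # does not require grad
grads = replace_none_grads((None, None), (x, y))
# grads[0] is a zero tensor shaped like x; grads[1] stays None,
# since y never needed a gradient in the first place.
```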
Contents of the test/ directory:

- data
- error_messages
- expect
- ffi/src
- optim
- common.py
- common_nn.py
- run_test.sh
- test_autograd.py
- test_cuda.py
- test_dataloader.py
- test_distributed.py
- test_distributions.py
- test_jit.py
- test_legacy_nn.py
- test_multiprocessing.py
- test_nccl.py
- test_nn.py
- test_optim.py
- test_potrf.py
- test_sparse.py
- test_torch.py
- test_utils.py