Summary: Operations on `Variable`s (or `torch::Tensor`) usually return `at::Tensor`. This is usually fine, but the `AnyModule` used in the implementation of `torch::Sequential` is very picky about types and does not understand implicit conversions like this. This means that `sequential.forward(at_tensor_that_is_actually_a_variable)` will fail unless you wrap `at_tensor_that_is_actually_a_variable` in `torch::Tensor`. This PR adds a special case to `AnyModule` that converts an `at::Tensor` to `torch::Tensor` when the tensor is really a variable, and otherwise just passes the `at::Tensor` along. This is a nice little usability improvement for the often-used `Sequential` class.

ebetica ezyang

Closes https://github.com/pytorch/pytorch/pull/8968

Reviewed By: ezyang

Differential Revision: D8670407

Pulled By: goldsborough

fbshipit-source-id: 3635ed6ed28238f3900ce4a876d07f1b11713831
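To make the usability point concrete, here is a minimal sketch (not taken from the PR) of the call pattern described above. It assumes the 2018-era C++ frontend, in which `torch::Tensor` (a Variable) and `at::Tensor` were still distinct types; the module choices and layer sizes are purely illustrative.

```cpp
#include <torch/torch.h>

int main() {
  // An illustrative Sequential; the actual modules do not matter here.
  torch::nn::Sequential model(
      torch::nn::Linear(/*in=*/10, /*out=*/5),
      torch::nn::Linear(/*in=*/5, /*out=*/1));

  torch::Tensor input = torch::ones({2, 10});

  // Per the summary, an operation on a Variable comes back typed as
  // `at::Tensor`, even though the underlying tensor is still a Variable.
  at::Tensor scaled = input * 2;

  // Before this change, AnyModule's strict type matching required wrapping
  // the result back into a `torch::Tensor` by hand:
  auto out_wrapped = model->forward(torch::Tensor(scaled));

  // With the special case added in this PR, the `at::Tensor` that is really
  // a Variable is accepted directly:
  auto out = model->forward(scaled);

  return 0;
}
```

Since the conversion lives inside `AnyModule`, caller code like the last `forward` call needs no changes to benefit from it.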
Directory listing:

- bottleneck
- cpp/api
- cpp_extensions
- data
- error_messages
- expect
- ffi/src
- onnx
- optim
- common.py
- common_cuda.py
- common_nn.py
- run_test.py
- test_autograd.py
- test_c10d.py
- test_cpp_extensions.py
- test_cuda.py
- test_dataloader.py
- test_distributed.py
- test_distributed_trap.py
- test_distributions.py
- test_indexing.py
- test_jit.py
- test_legacy_nn.py
- test_multiprocessing.py
- test_nccl.py
- test_nn.py
- test_optim.py
- test_sparse.py
- test_torch.py
- test_utils.py