pytorch/test/cpp/rpc
Pritam Damania 40eea6d9d1 Support device map for distributed autograd while using TensorPipe. (#44859)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44859

TensorPipe's `set_device_map` option was only applied during the forward
pass. When the backward pass ran for the graph, the reverse device mapping
was not picked up automatically.

As a result, users had to specify both the forward and the backward device
mapping, which is tedious and error-prone.

In this PR, I've added this functionality such that TensorPipe automatically
picks up the reverse device mapping during the backward pass. This is done by
storing the appropriate device mapping in the "recv" autograd function for
distributed autograd.
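
Concretely, with this change a user only configures the forward map and the
backward pass reuses its inverse. A minimal sketch of the setup (the worker
names, ranks, and GPU indices below are illustrative assumptions, and actually
running this requires a multi-process, multi-GPU environment):

```python
import torch.distributed.rpc as rpc
from torch.distributed.rpc import TensorPipeRpcBackendOptions

options = TensorPipeRpcBackendOptions()
# Forward device map: tensors sent from this worker's cuda:0 are placed
# on "worker1"'s cuda:1. ("worker1" is a hypothetical peer name.)
options.set_device_map("worker1", {0: 1})

rpc.init_rpc(
    "worker0",
    rank=0,
    world_size=2,
    rpc_backend_options=options,
)

# After this PR, gradients flowing back through distributed autograd use
# the inverse map ({1: 0}) automatically, via the device mapping stored in
# the "recv" autograd function; no explicit backward map is needed.
```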

Closes: https://github.com/pytorch/pytorch/issues/44170
ghstack-source-id: 119950842

Test Plan:
1) waitforbuildbot
2) Unit test added.

Reviewed By: mrshenli

Differential Revision: D23751975

fbshipit-source-id: 2717d0ef5bde3db029a6172d98aad95734d52140
2021-01-27 13:01:44 -08:00
CMakeLists.txt
e2e_test_base.cpp
e2e_test_base.h Support device map for distributed autograd while using TensorPipe. (#44859) 2021-01-27 13:01:44 -08:00
test_e2e_process_group.cpp
test_e2e_tensorpipe.cpp Fix memory leak in TensorPipeAgent. (#50564) 2021-01-18 16:34:28 -08:00
test_tensorpipe_serialization.cpp
test_wire_serialization.cpp