pytorch/test/custom_operator
Edward Yang eb71df3e63 Delete at::current_device(), Context::current_device() and Context::getNumGPUs() (#14414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14414

The previous functions were CUDA-centric and led to lots of places
where we improperly assumed that CUDA is the only game in town (it's not).
Best to delete them.

What are your alternatives?  This diff fixes some use sites, which may give
you some ideas.  In particular, "given a device type, give me the
current device for that device type" might be a good function to enshrine
for real.

Reviewed By: gchanan

Differential Revision: D13218540

fbshipit-source-id: 2f42cd6b9bdab4930d25166b8041c9466a1c6e0a
2018-12-03 10:54:52 -08:00
CMakeLists.txt Windows CI integration for custom ops (#12928) 2018-10-23 09:18:09 -07:00
model.py Allow building libraries with setuptools that dont have abi suffix (#14130) 2018-11-27 17:35:53 -08:00
op.cpp Add std::string to the getTypePtr for JIT inference of custom op types (#13683) 2018-11-10 12:58:53 -08:00
op.h Add std::string to the getTypePtr for JIT inference of custom op types (#13683) 2018-11-10 12:58:53 -08:00
test_custom_ops.cpp Delete at::current_device(), Context::current_device() and Context::getNumGPUs() (#14414) 2018-12-03 10:54:52 -08:00
test_custom_ops.py Add std::string to the getTypePtr for JIT inference of custom op types (#13683) 2018-11-10 12:58:53 -08:00