pytorch/test/cpp
johnlu db90533b9e Make JIT not assume that the device is CUDA. (#54238)
Summary:
Decouple the JIT argument spec and shape analysis from CUDA, so neither assumes that input tensors live on a CUDA device.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54238

Reviewed By: ngimel

Differential Revision: D28802085

Pulled By: Krovatkin

fbshipit-source-id: 4068c9460cdec2d80733f001ca90ea3f5e6d3a7e
2021-06-03 22:21:27 -07:00
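
For context, a minimal Python sketch of the behavior this commit targets; the scripted function f and its inputs are illustrative, not code from the PR. The JIT builds an argument spec (device, dtype, shape) for each profiled input, and after this change the argument-spec and shape-analysis paths should behave the same whether or not that device is CUDA.

    import torch

    @torch.jit.script
    def f(x):
        # A trivial scripted function; the JIT profiles an argument
        # spec (device, dtype, shape) for x when it is called.
        return x * 2 + 1

    # Plain CPU tensor: no CUDA device is assumed or required for the
    # JIT's argument-spec construction or shape analysis.
    x = torch.randn(4, 4)
    print(f(x))

    # The same scripted function runs unchanged on CUDA when present.
    if torch.cuda.is_available():
        print(f(x.to("cuda")))
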
api                       Back out "[pytorch][PR] ENH Adds dtype to nn.functional.one_hot" (#59080)        2021-05-27 15:40:52 -07:00
common
dist_autograd             Fix distributed autograd gradients synchronization (#57792)                      2021-05-09 17:32:59 -07:00
jit                       Make JIT not assume that the device is CUDA. (#54238)                            2021-06-03 22:21:27 -07:00
lite_interpreter_runtime  [Pytorch Delegated Backend] Save function name in debug info (#57481)            2021-05-25 13:19:02 -07:00
rpc                       [reland] Make TP agent use streams from Future when sending response (#59212)    2021-06-02 05:46:05 -07:00
tensorexpr                [NNC] Fix loopnest.cache_accesses for reduce ops (fixed #59002) (#59136)         2021-06-03 21:04:14 -07:00
__init__.py