pytorch/tools/codegen
Brian Hirsh 1a9efbbc92 generate inplace/out kernels for xla (#57510)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57510

This is a rewrite of https://github.com/pytorch/pytorch/pull/56835, which is significantly shorter thanks to the data-model change in the PR below this one in the stack. See the original description in the linked PR for details.

The functional changes in this PR are the same as in the PR linked above, so the description is the same with one small change:
- I don't bother generating `at::xla::{op}` entries for CPU fallbacks. There is precedent for that: we don't generate `at::cpu::{op}` entries for composite ops either; if you really want to bypass the dispatcher, you need to call `at::compositeimplicitautograd::{op}`. We can revisit this later if an important use case for full namespace coverage turns up, but it doesn't seem worth half-fixing for external backends in this PR.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28474364

Pulled By: bdhirsh

fbshipit-source-id: 4d58b60e5debad6f1ff06420597d8df8505b2876
2021-05-17 12:25:38 -07:00
Name                 Last commit message                                                                      Date
api/                 generate inplace/out kernels for xla (#57510)                                            2021-05-17 12:25:38 -07:00
dest/                generate inplace/out kernels for xla (#57510)                                            2021-05-17 12:25:38 -07:00
selective_build/     [PyTorch Edge] Eliminate non-determinism when generating build YAML file (#56539)        2021-04-20 17:26:14 -07:00
__init__.py
code_template.py
context.py           [codegen] split out backend-specific information from NativeFunction in the model (#57361)  2021-05-17 12:25:35 -07:00
gen.py               generate inplace/out kernels for xla (#57510)                                            2021-05-17 12:25:38 -07:00
gen_backend_stubs.py generate inplace/out kernels for xla (#57510)                                            2021-05-17 12:25:38 -07:00
local.py             [PyTorch] Fix const correctness for resize native functions (#55351)                    2021-04-21 14:51:41 -07:00
model.py             generate inplace/out kernels for xla (#57510)                                            2021-05-17 12:25:38 -07:00
utils.py