pytorch/torch/_higher_order_ops
rzou 0f768c7866 Barebones flat_apply HOP (#146060)
This PR:
- adds pytree.register_constant for registering a class to be treated as
  a constant by torch.compile/torch.fx (see the sketch after this list)
- adds a very barebones flat_apply HOP. This should be sufficient to get
  mark_traceable working. A lot more work is necessary to get the custom
  operator case working (when make_fx sees a custom operator with PyTree
  arg types, it needs to emit a call to the flat_apply HOP).
- I expect the flat_apply HOP to change a lot; I want to ship it in its
  current state to unblock the mark_traceable and custom ops work.
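
For illustration, here is a minimal sketch of how the new registration hook might be used, assuming register_constant lives in torch.utils._pytree and takes only the class to register; the Config class, its fields, and the flatten behavior shown are illustrative assumptions, not code from the PR:

```python
# Minimal sketch, assuming register_constant(cls) is the registration API.
import torch.utils._pytree as pytree


class Config:
    """A plain Python object we want torch.compile/torch.fx to treat as a constant."""

    def __init__(self, scale: float):
        self.scale = scale

    # Assumption: constant classes should be hashable/comparable so tracing
    # can specialize and cache on particular instances.
    def __eq__(self, other):
        return isinstance(other, Config) and self.scale == other.scale

    def __hash__(self):
        return hash(self.scale)


pytree.register_constant(Config)

# After registration, a Config instance inside a pytree is expected to be
# captured in the treespec rather than flattened into leaves, which is what
# lets a flat_apply-style call pass it alongside a flat argument list.
leaves, spec = pytree.tree_flatten({"x": 1, "cfg": Config(scale=2.0)})
```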

Test Plan:
- It's difficult to test that the barebones flat_apply HOP "works" on its
  own, so I added a really simple test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146060
Approved by: https://github.com/StrongerXi, https://github.com/yanboliang
ghstack dependencies: #146059
2025-02-01 16:17:48 +00:00
__init__.py Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
aoti_call_delegate.py Introduce aoti_call_delegate HOP (#145630) 2025-01-31 04:57:36 +00:00
associative_scan.py Require that all HOPs be imported at import torch time (#145939) 2025-01-29 22:27:52 +00:00
auto_functionalize.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
cond.py [cond] remove warning for unsupported tuple returns (#145766) 2025-01-28 03:13:36 +00:00
effects.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
executorch_call_delegate.py
flat_apply.py Barebones flat_apply HOP (#146060) 2025-02-01 16:17:48 +00:00
flex_attention.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
foreach_map.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
hints_wrap.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
invoke_subgraph.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
map.py
out_dtype.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
prim_hop_base.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
run_const_graph.py [export] Unify single and multiple return for hops (#143227) 2025-01-13 03:31:14 +00:00
scan.py [scan] scan dim handling in user-facing scan() (#145179) 2025-01-30 21:09:07 +00:00
strict_mode.py
torchbind.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
triton_kernel_wrap.py [inductor] Make triton kernel autotune config defaults backward-compatible (#145494) 2025-01-29 00:31:39 +00:00
utils.py [while_loop] specialize when cond_fn return constants (#144515) 2025-01-30 19:02:34 +00:00
while_loop.py [hop] fix unbacked_bindings meta for while_loop (#143559) 2025-01-30 21:33:09 +00:00
wrap.py Require that all HOPs be imported at import torch time (#145939) 2025-01-29 22:27:52 +00:00