This PR:
- adds `pytree.register_constant` for registering a class to be treated as a constant by torch.compile/torch.fx
- adds a very barebones `flat_apply` HOP. This should be sufficient to get mark_traceable working. A lot more work is necessary to get the custom-operator case working (when make_fx sees a custom operator with pytree arg types, it needs to emit a call to the `flat_apply` HOP).
- I expect the `flat_apply` HOP to change a lot; I want to ship it in its current state to unblock the mark_traceable and custom ops work.

Test Plan:
- It is difficult to test that the barebones `flat_apply` HOP "works", so I added a very simple test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146060
Approved by: https://github.com/StrongerXi, https://github.com/yanboliang
ghstack dependencies: #146059
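For context, below is a minimal sketch of how `pytree.register_constant` is intended to be used. The `Norm` class is hypothetical, and the assertions assume that a constant-registered instance flattens to zero leaves with the value carried in the treespec itself; that property is what would let `flat_apply` pass non-tensor arguments through a graph as specs plus flat tensor args. The exact contract (e.g. whether `__eq__`/`__hash__` are required) may differ from this sketch.

```python
import torch.utils._pytree as pytree

# Hypothetical config-like class that torch.compile should not trace into.
class Norm:
    def __init__(self, typ: str):
        self.typ = typ

    # Assumption: constants are compared when specs/guards are checked,
    # so the class should be equality-comparable and hashable.
    def __eq__(self, other):
        return isinstance(other, Norm) and self.typ == other.typ

    def __hash__(self):
        return hash(self.typ)

# Register Norm so torch.compile/torch.fx treat instances as constants.
pytree.register_constant(Norm)

n = Norm("l2")
leaves, spec = pytree.tree_flatten(n)
# Assumed behavior: a constant-registered instance contributes no leaves;
# the value rides along in the treespec, so unflattening restores it.
assert leaves == []
assert pytree.tree_unflatten(leaves, spec) == n
```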
| File |
|---|
| __init__.py |
| aoti_call_delegate.py |
| associative_scan.py |
| auto_functionalize.py |
| cond.py |
| effects.py |
| executorch_call_delegate.py |
| flat_apply.py |
| flex_attention.py |
| foreach_map.py |
| hints_wrap.py |
| invoke_subgraph.py |
| map.py |
| out_dtype.py |
| prim_hop_base.py |
| run_const_graph.py |
| scan.py |
| strict_mode.py |
| torchbind.py |
| triton_kernel_wrap.py |
| utils.py |
| while_loop.py |
| wrap.py |