pytorch/torch/_higher_order_ops
Michael Lazos b3f30c9bc3 [Dynamo] Move flex attention torch function mode to traceable HOP file (#137120)
Moves `TransformGetItemToIndex` to a file where Dynamo stores other traceable HOP concepts. (We don't trace through torch.* modules by default.)
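
For context, `TransformGetItemToIndex` is a torch function mode that keeps indexing with scalar tensor indices traceable by rerouting `Tensor.__getitem__` to an explicit `aten.index` call. The snippet below is a minimal illustrative sketch of that idea, not the actual flex attention implementation; the exact dispatch details are assumptions.

```python
import torch
from torch.overrides import TorchFunctionMode
from torch.utils import _pytree as pytree


class TransformGetItemToIndex(TorchFunctionMode):
    """Sketch: reroute Tensor.__getitem__ with tensor indices to aten.index
    so the indexing stays a traceable tensor op instead of being implicitly
    converted to a Python scalar."""

    def __torch_function__(self, func, types, args=(), kwargs=None):
        if func is torch.Tensor.__getitem__:
            index_args = pytree.tree_leaves(args[1])
            if all(isinstance(x, torch.Tensor) for x in index_args):
                return torch.ops.aten.index(args[0], index_args)
        return func(*args, **(kwargs or {}))


# Usage sketch: inside the mode, A[q_idx] with a 0-dim tensor index remains a tensor op.
A = torch.arange(8.0)
q_idx = torch.tensor(3)
with TransformGetItemToIndex():
    out = A[q_idx]
```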

Tracing through the mode required fixing a bug in Dynamo's autograd function handling; that fix removed a graph break, which in turn surfaced the autograd test failures (skipping them for now, will file an issue).

Previously those tests were, in essence, running in eager mode, because Dynamo would fall back due to an arg mismatch error.
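
As a side note (not part of this PR), one way to make such silent fallbacks visible in a test is to compile with `fullgraph=True`, which turns graph breaks and fallbacks into hard errors instead of quietly running eager; a minimal sketch:

```python
import torch


def score(x):
    return torch.sin(x) + 1


# fullgraph=True makes Dynamo raise on any graph break or fallback, so a test
# meant to exercise the compiled path fails loudly instead of silently running
# in eager mode.
compiled = torch.compile(score, fullgraph=True)
print(compiled(torch.randn(4)))
```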

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137120
Approved by: https://github.com/yanboliang, https://github.com/malfet
ghstack dependencies: #137114, #137115, #137116, #137117
2024-10-09 02:29:40 +00:00
__init__.py [HOO] add hints_wrapper to support passing context hints (#132860) 2024-08-26 18:21:22 +00:00
associative_scan.py Implementation of scan (#134102) 2024-09-10 04:51:16 +00:00
auto_functionalize.py Add type annotations for higher order ops/flex_attention (#137065) 2024-10-02 04:39:25 +00:00
cond.py cond_batch_rule with boolean pred (#135009) 2024-10-03 07:43:30 +00:00
effects.py [effects] Turn off dtype promotion for with_effects lowering (#136039) 2024-09-16 16:14:05 +00:00
executorch_call_delegate.py [hop] require hops to override __call__. (#134352) 2024-08-28 19:56:40 +00:00
flex_attention.py [Dynamo] Move flex attention torch function mode to traceable HOP file (#137120) 2024-10-09 02:29:40 +00:00
hints_wrap.py [HOO] add hints_wrapper to support passing context hints (#132860) 2024-08-26 18:21:22 +00:00
map.py [hop] preserve metadata in re-tracing hop subgraph by running with interpreter (#135159) 2024-09-05 21:36:56 +00:00
out_dtype.py Make the __module__ name of HOO to be always "torch.ops.higher_order" (#132775) 2024-08-08 16:55:09 +00:00
run_const_graph.py [hop] require hops to override __call__. (#134352) 2024-08-28 19:56:40 +00:00
scan.py [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) 2024-09-14 18:52:22 +00:00
strict_mode.py [Dynamo] Ensure torch function modes are dispatched on builtin ops (#137117) 2024-10-09 02:29:40 +00:00
torchbind.py [hop] require hops to override __call__. (#134352) 2024-08-28 19:56:40 +00:00
triton_kernel_wrap.py Pass special arguments to user-defined Triton kernels if required (#137236) 2024-10-04 07:36:55 +00:00
utils.py Implementation of scan (#134102) 2024-09-10 04:51:16 +00:00
while_loop.py [Dynamo] Use custom backend to reenter metadata tf mode when tracing while/cond (#134732) 2024-09-14 18:52:22 +00:00
wrap.py Allow fx graph caching higher order operators (opt-in) (#135877) 2024-09-24 17:23:09 +00:00