pytorch/torch/_higher_order_ops
Bert Maher ae0f305bf9 [inductor] Make triton kernel autotune config defaults backward-compatible (#145494)
If a model was torch.packaged using triton<=3.1, any user-defined
autotuned kernels will have rep/warmup values burned in with the old defaults
(100/25).  If this model is loaded with triton>=3.2, inductor's checks for
unsupported non-default autotune args will fail, because triton.Autotuner's
defaults for these parameters have changed to `None`.  Let's explicitly accept
those old values for backward compatibility with these older models.

Differential Revision: [D68561014](https://our.internmc.facebook.com/intern/diff/D68561014/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145494
Approved by: https://github.com/aorenste
2025-01-29 00:31:39 +00:00
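For illustration, here is a minimal sketch of the compatibility check the commit message describes. The helper name `is_default_autotune_arg` and the standalone structure are hypothetical; the real check lives in inductor's `triton_kernel_wrap.py` and operates on `triton.Autotuner` instances rather than bare values.

```python
# Hypothetical sketch: treat an autotune arg as "default" if it matches either
# the current triton default (None on triton>=3.2) or the old triton<=3.1
# default that torch.package burned into the kernel at packaging time.

_OLD_DEFAULTS = {"warmup": 25, "rep": 100}  # triton<=3.1 defaults

def is_default_autotune_arg(name: str, value, current_default) -> bool:
    # Accept the current default as well as the legacy burned-in default.
    return value == current_default or (
        name in _OLD_DEFAULTS and value == _OLD_DEFAULTS[name]
    )

# A kernel packaged under triton<=3.1 carries warmup=25, rep=100, while
# triton>=3.2 reports a current default of None for both:
assert is_default_autotune_arg("warmup", 25, None)
assert is_default_autotune_arg("rep", 100, None)
# A genuinely customized value is still flagged as non-default:
assert not is_default_autotune_arg("rep", 50, None)
```

With a check of this shape, models packaged against either triton generation pass the "only default autotune args" validation, while truly customized values are still rejected.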
__init__.py [foreach_map] Initial foreach map HOP impl for inference (#142098) 2024-12-11 21:32:11 +00:00
associative_scan.py [associative_scan] scan dim handling in user-facing associative_scan() (#139864) 2025-01-28 23:58:10 +00:00
auto_functionalize.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
cond.py [cond] remove warning for unsupported tuple returns (#145766) 2025-01-28 03:13:36 +00:00
effects.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
executorch_call_delegate.py [hop] require hops to override __call__. (#134352) 2024-08-28 19:56:40 +00:00
flex_attention.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
foreach_map.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
hints_wrap.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
invoke_subgraph.py PEP585 update - torch/_higher_order_ops torch/_subclasses torch/backends torch/compiler torch/cuda torch/masked torch/mtia torch/nested (#145202) 2025-01-20 22:37:26 +00:00
map.py [BE]: Apply PERF401 autofixes from ruff (#140980) 2024-11-20 17:52:07 +00:00
out_dtype.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
prim_hop_base.py [BE] typing for decorators - library (#138969) 2025-01-15 17:08:55 +00:00
run_const_graph.py [export] Unify single and multiple return for hops (#143227) 2025-01-13 03:31:14 +00:00
scan.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
strict_mode.py [Dynamo] Ensure torch function modes are dispatched on builtin ops (#137117) 2024-10-09 02:29:40 +00:00
torchbind.py Remove unused Python variables in torch/[_-a]* (#133492) 2024-12-12 17:39:14 +00:00
triton_kernel_wrap.py [inductor] Make triton kernel autotune config defaults backward-compatible (#145494) 2025-01-29 00:31:39 +00:00
utils.py [BE][Ez]: FURB148 - remove useless enumerate calls (#145619) 2025-01-24 23:37:15 +00:00
while_loop.py [hop][be] add utils for more comprehensive input alias and mutation (#145298) 2025-01-23 18:12:28 +00:00
wrap.py Allow fx graph caching higher order operators (opt-in) (#135877) 2024-09-24 17:23:09 +00:00