pytorch/benchmarks/dynamo/microbenchmarks

Latest commit: dc55704b48 by Oguz Ulgen, "Rename cache limit to recompile limit in configs" (#143709)
This PR renames every occurrence of cache_limit to recompile_limit via sed.

The old config options are kept working as aliases, via Config(alias='xyz').

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143709
Approved by: https://github.com/jansel
2024-12-22 10:03:57 +00:00
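The alias mechanism the commit message describes can be illustrated with a minimal sketch: a renamed option stays usable because the old name forwards reads and writes to the new canonical name. This is an assumption-laden illustration, not PyTorch's actual Config class; the names `ConfigModule`, `define`, and `cache_size_limit` here are hypothetical.

```python
# Hypothetical sketch of config aliasing for renamed options.
# Not the real torch._dynamo config API.

class ConfigModule:
    """Config store where a renamed option remains usable through an
    alias pointing at its new canonical name."""

    def __init__(self):
        self._values = {}   # canonical option name -> current value
        self._aliases = {}  # old name -> canonical (or newer) name

    def define(self, name, default=None, alias_of=None):
        # An entry is either a real option with a default, or an alias.
        if alias_of is None:
            self._values[name] = default
        else:
            self._aliases[name] = alias_of

    def _canonical(self, name):
        # Follow the alias chain to the canonical option name.
        seen = set()
        while name in self._aliases:
            if name in seen:
                raise ValueError(f"alias cycle at {name!r}")
            seen.add(name)
            name = self._aliases[name]
        if name not in self._values:
            raise AttributeError(f"unknown config option {name!r}")
        return name

    def get(self, name):
        return self._values[self._canonical(name)]

    def set(self, name, value):
        self._values[self._canonical(name)] = value


cfg = ConfigModule()
# New canonical name, plus the pre-rename spelling as an alias.
cfg.define("recompile_limit", default=8)
cfg.define("cache_size_limit", alias_of="recompile_limit")

cfg.set("cache_size_limit", 64)  # write through the old name
assert cfg.get("recompile_limit") == 64
```

Because both names resolve to the same stored value, code written against the old spelling keeps working after the rename.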
operator_inp_logs/
__init__.py
analyze_templates.py [BE] Format .ci/ / .github/ / benchmarks/ / functorch/ / tools/ / torchgen/ with ruff format (#132577) 2024-10-11 18:30:26 +00:00
bench_mm_fusion.py
benchmark_helper.py
cache_debug_microbenchmarks.py Add microbenchmark for FxGraphHashDetails.debug_lines (#137506) 2024-10-09 16:15:05 +00:00
cache_hit_microbenchmarks.py Add a microbenchmark for cache read path (#137607) 2024-10-10 16:36:18 +00:00
dynamo_guard_eval.py Rename cache limit to recompile limit in configs (#143709) 2024-12-22 10:03:57 +00:00
dynamo_microbenchmarks.py
fx_microbenchmarks.py
inductor_bmm.py
inductor_cpu_atomic.py
inductor_mm.py
matmul_relu.py
microbench.py
model.py
operator_inp_utils.py
operatorbench.py [inductor] Benchmark Halide in operatorbench.py (#136809) 2024-09-28 19:26:04 +00:00
overheads.py
tensor_layout_mini_benchmark.py
utils.py