
Instructions on how to make a new compile time benchmark

  1. Make a new benchmark file in `benchmarks/dynamo/pr_time_benchmarks/benchmarks/`, e.g. `benchmarks/dynamo/pr_time_benchmarks/benchmarks/add_loop.py` (see the sketch after this list)
  2. cd into the pr_time_benchmarks directory: `cd benchmarks/dynamo/pr_time_benchmarks`
  3. Run `PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py a.txt`
  4. (Optional) Flip a flag that you know will change the benchmark and run again, this time writing to b.txt: `PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py b.txt` (one way to flip a flag is sketched after this list)
  5. Compare `a.txt` and `b.txt` (both located in the `benchmarks/dynamo/pr_time_benchmarks` folder) to make sure the results look as you expect
  6. Check in your new benchmark file and submit a new PR
  7. In a few days, if your benchmark is stable, bug Laith Sakka to enable running your benchmark on all PRs. If you're a Meta employee, you can find the dashboard here: internalfb.com/intern/unidash/dashboard/pt2_diff_time_metrics
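
For step 1, below is a minimal sketch of what a benchmark file can look like, modeled loosely on the existing benchmarks in `benchmarks/` (e.g. `add_loop.py`). The `BenchmarkBase` constructor arguments, hook names, and the `main()` plumbing are assumptions based on that pattern; copy an existing benchmark for the exact interface. The workload inside `_work` is purely illustrative.

```python
# Minimal sketch of a new benchmark, modeled on the existing benchmarks in
# this directory (e.g. add_loop.py). The BenchmarkBase constructor arguments
# and hook names below are assumptions -- check an existing benchmark for
# the exact interface.
import sys

import torch

from benchmark_base import BenchmarkBase


class Benchmark(BenchmarkBase):
    def __init__(self):
        # Assumed constructor arguments; they control how results are named.
        super().__init__(category="my_benchmark", backend="eager", device="cpu")

    def name(self):
        return f"{self.category()}_{self.backend()}"

    def description(self):
        return "compile time of a small loop of tensor adds"

    def _prepare_once(self):
        # One-time setup shared across all measurement iterations.
        self.a = torch.ones(1000)

    def _prepare(self):
        # Runs before each iteration; reset dynamo so every iteration
        # measures a cold compile.
        torch._dynamo.reset()

    def _work(self):
        # The code whose compile time is being measured.
        @torch.compile(backend=self.backend(), fullgraph=True)
        def f(a):
            result = a.clone()
            for i in range(100):
                result = result + i
            return result

        f(self.a)


def main():
    result_path = sys.argv[1]  # a.txt / b.txt from steps 3 and 4
    Benchmark().enable_compile_time_instruction_count().collect_all().append_results(
        result_path
    )


if __name__ == "__main__":
    main()
```

Each run appends the collected metrics to the result file passed on the command line; once your benchmark runs on CI, its expected values are tracked in `expected_results.csv` and checked by `check_results.py`.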
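
For step 4, one way to flip a flag is to toggle a compile-time-relevant config at the top of your benchmark module before rerunning. `torch._inductor.config.fx_graph_cache` is one such flag, though whether it moves your particular benchmark is an assumption you should verify:

```python
# Hypothetical step 4 tweak: flip a config you expect to affect compile time,
# then rerun with `PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py b.txt`.
import torch._inductor.config as inductor_config

# Assumption: disabling the FX graph cache changes this benchmark's numbers.
inductor_config.fx_graph_cache = False
```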