pytorch/benchmarks/dynamo
Aaron Orenstein 07669ed960 PEP585 update - benchmarks tools torchgen (#145101)
This is one of a series of PRs to update us to PEP585 (changing Dict -> dict, List -> list, etc.).  Most of the PRs were completely automated with RUFF as follows:

Since RUFF UP006 is considered an "unsafe" fix, we first need to enable unsafe fixes:

```
--- a/tools/linter/adapters/ruff_linter.py
+++ b/tools/linter/adapters/ruff_linter.py
@@ -313,6 +313,7 @@
                     "ruff",
                     "check",
                     "--fix-only",
+                    "--unsafe-fixes",
                     "--exit-zero",
                     *([f"--config={config}"] if config else []),
                     "--stdin-filename",
```

Then we need to tell RUFF to allow UP006 (once all of these PRs have landed, a final PR will make this permanent):

```
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -40,7 +40,7 @@

 [tool.ruff]
-target-version = "py38"
+target-version = "py39"
 line-length = 88
 src = ["caffe2", "torch", "torchgen", "functorch", "test"]

@@ -87,7 +87,6 @@
     "SIM116", # Disable Use a dictionary instead of consecutive `if` statements
     "SIM117",
     "SIM118",
-    "UP006", # keep-runtime-typing
     "UP007", # keep-runtime-typing
 ]
 select = [
```

Finally, running `lintrunner -a --take RUFF` will fix up the deprecated uses.
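For illustration, the kind of rewrite UP006 performs looks like the following (a hypothetical hunk; the file name and function are made up, not taken from this PR):

```
--- a/example.py
+++ b/example.py
@@ -10,2 +10,2 @@
-def summarize(times: List[float]) -> Dict[str, float]:
+def summarize(times: list[float]) -> dict[str, float]:
     return {"mean": sum(times) / len(times)}
```

The `from typing import Dict, List` line itself is handled by separate rules (UP035 flags deprecated `typing` imports and F401 flags unused ones), not by UP006.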

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145101
Approved by: https://github.com/bobrenjc93
2025-01-18 05:05:07 +00:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| `ci_expected_accuracy/` | Update ci_expected_accuracy for TIMM levit_128 for further investigation (#145112) | 2025-01-18 01:55:34 +00:00 |
| `microbenchmarks/` | PEP585 update - benchmarks tools torchgen (#145101) | 2025-01-18 05:05:07 +00:00 |
| `pr_time_benchmarks/` | basic InductorBenchmarker (#133058) | 2025-01-18 02:35:00 +00:00 |
| `__init__.py` | | |
| `all_torchbench_models_list.txt` | | |
| `benchmarks.py` | PEP585 update - benchmarks tools torchgen (#145101) | 2025-01-18 05:05:07 +00:00 |
| `check_accuracy.py` | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |
| `check_csv.py` | | |
| `check_graph_breaks.py` | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |
| `check_memory_compression_ratio.py` | | |
| `check_perf_csv.py` | [AOTI] Turn on the ABI-compatible mode as default (#136534) | 2024-10-13 14:42:58 +00:00 |
| `combine_csv.py` | | |
| `common.py` | PEP585 update - benchmarks tools torchgen (#145101) | 2025-01-18 05:05:07 +00:00 |
| `dist_util.py` | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |
| `distributed.py` | | |
| `expected_ci_perf_inductor_torchbench.csv` | | |
| `expected_ci_speedup_inductor_torchbench_cpu.csv` | [AOTI] Add a boxed_run API (#142213) | 2025-01-14 18:47:42 +00:00 |
| `huggingface.py` | Enable autograd cache on inductor tests (#140890) | 2024-11-27 20:41:43 +00:00 |
| `huggingface.yaml` | change GPT2ForSequenceClassification inference accuracy tolerance (#136749) | 2024-10-12 01:12:28 +00:00 |
| `huggingface_models_list.txt` | | |
| `huggingface_models_list_cpu.txt` | | |
| `join_results.py` | [BE] Format .ci/ / .github/ / benchmarks/ / functorch/ / tools/ / torchgen/ with ruff format (#132577) | 2024-10-11 18:30:26 +00:00 |
| `Makefile` | | |
| `parse_logs.py` | | |
| `README.md` | | |
| `run_all.sh` | | |
| `run_delta.sh` | | |
| `runner.py` | [BE] fix ruff rule E226: add missing whitespace around operator in f-strings (#144415) | 2025-01-08 21:55:00 +00:00 |
| `summarize_perf.py` | | |
| `test.py` | | |
| `timm_models.py` | Add convnext_base to higher tolerance (#142159) | 2024-12-06 04:00:13 +00:00 |
| `timm_models_list.txt` | | |
| `timm_models_list_cpu.txt` | | |
| `torchao_backend.py` | Rename cache limit to recompile limit in configs (#143709) | 2024-12-22 10:03:57 +00:00 |
| `torchbench.py` | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |
| `torchbench.yaml` | | |
| `torchbench_models_list.txt` | | |
| `torchbench_models_list_cpu.txt` | | |
| `training_loss.py` | [BE] fix ruff rule E226: add missing whitespace around operator in f-strings (#144415) | 2025-01-08 21:55:00 +00:00 |

torch.compile() Benchmarking

This directory contains benchmarking code for TorchDynamo and many backends including TorchInductor. It includes three main benchmark suites:

  • TorchBenchmark: A diverse set of models, initially seeded from highly cited research models as ranked by Papers With Code. See torchbench installation and torchbench.py for the low-level runner. The Makefile also contains the commands needed to set up TorchBenchmark to match the versions used in PyTorch CI.

  • Models from HuggingFace: Primarily transformer models, with representative models chosen from each available category. The low-level runner (huggingface.py) automatically downloads and installs the needed dependencies on first run.

  • Models from TIMM: Primarily vision models, with representative models chosen from each available category. The low-level runner (timm_models.py) automatically downloads and installs the needed dependencies on first run; a quick single-model invocation is sketched below.
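As a quick first run, any of the runners can benchmark a single model directly. For example (the flags are described under "Running Locally" below; convnext_base is one model from the TIMM suite, and the output file name is illustrative):

./benchmarks/dynamo/timm_models.py --performance --inference --bfloat16 --backend=inductor --device=cuda --only=convnext_base --output=timm_smoke.csv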

GPU Performance Dashboard

Daily results from the benchmarks here are available in the TorchInductor Performance Dashboard, currently run on an NVIDIA A100 GPU.

The inductor-perf-test-nightly.yml workflow generates the data in the performance dashboard. If you have the needed permissions, you can benchmark your own branch on the PyTorch GitHub repo by:

  1. Select "Run workflow" in the top right of the workflow page
  2. Select the branch you want to benchmark
  3. Choose the options (such as training vs inference)
  4. Click "Run workflow"
  5. Wait for the job to complete (4 to 12 hours depending on backlog)
  6. Go to the dashboard
  7. Select your branch and commit at the top of the dashboard
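Alternatively, if you have the GitHub CLI installed and the needed permissions, the same workflow can be triggered from the command line. This is only a sketch: any required workflow inputs (for example, training vs. inference) would need to be supplied with -f key=value, and their exact names are not listed here.

gh workflow run inductor-perf-test-nightly.yml --repo pytorch/pytorch --ref my-feature-branch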

The dashboard compares two commits: a "Base Commit" and a "New Commit". An entry such as 2.38x → 2.41x means that performance improved from 2.38x in the base commit to 2.41x in the new commit. All performance results are normalized to eager mode PyTorch (1x), and higher is better; a 2.38x entry, for example, means the model ran 2.38 times faster with the measured backend than it does in eager mode.

CPU Performance Dashboard

The TorchInductor CPU Performance Dashboard is tracked on a GitHub issue and updated periodically.

Running Locally

Raw commands used to generate the data for the performance dashboards can be found here.

To summarize, there are three scripts, one to run each set of benchmarks:

  • ./benchmarks/dynamo/torchbench.py ...
  • ./benchmarks/dynamo/huggingface.py ...
  • ./benchmarks/dynamo/timm_models.py ...

Each of these scripts takes the same set of arguments. The ones used by dashboards are:

  • --accuracy or --performance: selects between checking correctness and measuring speedup (both are run for the dashboard).
  • --training or --inference: selects between measuring training or inference (both are run for the dashboard).
  • --device=cuda or --device=cpu: selects device to measure.
  • --amp, --bfloat16, --float16, --float32: selects the precision to use; --amp is used for training and --bfloat16 for inference.
  • --cold-start-latency: disables caching to accurately measure compile times.
  • --backend=inductor: selects TorchInductor as the compiler backend to measure. Many more are available; see --help.
  • --output=<filename>.csv: where to write the results.
  • --dynamic-shapes --dynamic-batch-only: used when the dynamic config is enabled.
  • --disable-cudagraphs: used by configurations without cudagraphs enabled (default).
  • --freezing: enable additional inference-only optimizations.
  • --cpp-wrapper: enable C++ wrapper code to lower overheads.
  • TORCHINDUCTOR_MAX_AUTOTUNE=1 (environment variable): used to measure max-autotune mode, which is run weekly due to longer compile times.
  • --export-aot-inductor: benchmarks ahead-of-time compilation mode.
  • --total-partitions and --partition-id: used to parallelize benchmarking across different machines (see the sketch after this list).
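For instance, the partitioning flags can be used to split one suite across two machines. This is a sketch assuming partition ids are 0-indexed; the output file names are illustrative.

On machine 0:

./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --total-partitions=2 --partition-id=0 --output=torchbench_inference_part0.csv

On machine 1:

./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --total-partitions=2 --partition-id=1 --output=torchbench_inference_part1.csv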

For debugging you can run just a single benchmark by adding the --only=<NAME> flag.

A complete list of options can be seen by running each of the runners with the --help flag.

As an example, the commands to run the first line of the dashboard (performance only) would be:

./benchmarks/dynamo/torchbench.py --performance --training --amp --backend=inductor --output=torchbench_training.csv
./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --output=torchbench_inference.csv

./benchmarks/dynamo/huggingface.py --performance --training --amp --backend=inductor --output=huggingface_training.csv
./benchmarks/dynamo/huggingface.py --performance --inference --bfloat16 --backend=inductor --output=huggingface_inference.csv

./benchmarks/dynamo/timm_models.py --performance --training --amp --backend=inductor --output=timm_models_training.csv
./benchmarks/dynamo/timm_models.py --performance --inference --bfloat16 --backend=inductor --output=timm_models_inference.csv
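
The max-autotune configuration mentioned above is selected with an environment variable rather than a flag; for example (the output file name is illustrative):

TORCHINDUCTOR_MAX_AUTOTUNE=1 ./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --output=torchbench_inference_max_autotune.csv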