From 950b48435696993b37d8d27bc3acfadf8a4fe732 Mon Sep 17 00:00:00 2001
From: leslie-fang-intel
Date: Wed, 28 Feb 2024 10:14:34 +0800
Subject: [PATCH] skip three pyhpc models with dynamic shape test (#120599)

As reported in https://github.com/pytorch/pytorch/issues/119434, `pyhpc_isoneutral_mixing`, `pyhpc_equation_of_state`, and `pyhpc_turbulent_kinetic_energy` fail with dynamic shape testing. This PR skips the dynamic batch size testing for these 3 models.

* Error message:
  ```
  File "/localdisk/leslie/torch_inductor_community/pytorch/benchmarks/dynamo/common.py", line 3879, in run
      assert marked, f"nothing in example_inputs had a dim with {batch_size}"
  AssertionError: nothing in example_inputs had a dim with 1048576
  ```
* Root cause:
  * The benchmark code only annotates an input dim as dynamic when its size equals the batch size (https://github.com/pytorch/pytorch/blob/c617e7b4076a5f968f5827040a07b013e45cd0c6/benchmarks/dynamo/common.py#L3867-L3871). If no dim equals the batch size, the error above is thrown.
  * However, for these 3 models no input dim ever equals the batch size, because of the [relationship between the dim sizes and the batch size](https://github.com/pytorch/benchmark/blob/26b85eadde28645c9b04b2d5a5b37f4d810b5100/torchbenchmark/models/pyhpc_equation_of_state/__init__.py#L12-L16):
    ```
    shape = (
        math.ceil(2 * size ** (1/3)),
        math.ceil(2 * size ** (1/3)),
        math.ceil(0.25 * size ** (1/3)),
    )
    ```
  * Note that `pyhpc_isoneutral_mixing` and `pyhpc_equation_of_state` do pass the dynamic batch size accuracy testing, because accuracy testing sets the batch size to 4 (https://github.com/pytorch/pytorch/blob/c617e7b4076a5f968f5827040a07b013e45cd0c6/benchmarks/dynamo/common.py#L3456) and `math.ceil(2 * size ** (1/3))` happens to equal 4.
* Since the input dim sizes are related as shown above, running these models with dynamic shapes would require annotating a relationship such as `dim[0](s0) = dim[2](s1) * 8`. Per the discussion with @avikchaudhuri in https://github.com/pytorch/pytorch/issues/117477#issuecomment-1897108756, such a constraint does not currently look expressible, so we skip the dynamic batch size testing for these 3 models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120599
Approved by: https://github.com/jgong5, https://github.com/desertfire
---
 .../dynamic_aot_eager_torchbench_inference.csv    | 4 ++++
 .../dynamic_cpu_inductor_torchbench_inference.csv | 4 ++++
 .../dynamic_inductor_torchbench_inference.csv     | 4 ++++
 benchmarks/dynamo/common.py                       | 3 +++
 4 files changed, 15 insertions(+)

diff --git a/benchmarks/dynamo/ci_expected_accuracy/dynamic_aot_eager_torchbench_inference.csv b/benchmarks/dynamo/ci_expected_accuracy/dynamic_aot_eager_torchbench_inference.csv
index 99208c4f068..f25b6a3d929 100644
--- a/benchmarks/dynamo/ci_expected_accuracy/dynamic_aot_eager_torchbench_inference.csv
+++ b/benchmarks/dynamo/ci_expected_accuracy/dynamic_aot_eager_torchbench_inference.csv
@@ -262,6 +262,10 @@ pyhpc_isoneutral_mixing,pass,0
 
 
 
+pyhpc_turbulent_kinetic_energy,pass,0
+
+
+
 pytorch_CycleGAN_and_pix2pix,pass,0
 
 
diff --git a/benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_inductor_torchbench_inference.csv b/benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_inductor_torchbench_inference.csv
index 0dba6f54b3b..57de74c25c8 100644
--- a/benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_inductor_torchbench_inference.csv
+++ b/benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_inductor_torchbench_inference.csv
@@ -190,6 +190,10 @@ pyhpc_isoneutral_mixing,pass,0
 
 
 
+pyhpc_turbulent_kinetic_energy,pass,0
+
+
+
 pytorch_CycleGAN_and_pix2pix,pass,0
 
 
diff --git a/benchmarks/dynamo/ci_expected_accuracy/dynamic_inductor_torchbench_inference.csv b/benchmarks/dynamo/ci_expected_accuracy/dynamic_inductor_torchbench_inference.csv
index 99208c4f068..f25b6a3d929 100644
--- a/benchmarks/dynamo/ci_expected_accuracy/dynamic_inductor_torchbench_inference.csv
+++ b/benchmarks/dynamo/ci_expected_accuracy/dynamic_inductor_torchbench_inference.csv
@@ -262,6 +262,10 @@ pyhpc_isoneutral_mixing,pass,0
 
 
 
+pyhpc_turbulent_kinetic_energy,pass,0
+
+
+
 pytorch_CycleGAN_and_pix2pix,pass,0
 
 
diff --git a/benchmarks/dynamo/common.py b/benchmarks/dynamo/common.py
index a206ae55b85..22c0ac2cf50 100644
--- a/benchmarks/dynamo/common.py
+++ b/benchmarks/dynamo/common.py
@@ -135,6 +135,9 @@ CI_SKIP_DYNAMIC_BATCH_ONLY = {
     # We should be able to graphbreak there.
     "doctr_det_predictor",
     "dlrm",
+    "pyhpc_isoneutral_mixing",
+    "pyhpc_equation_of_state",
+    "pyhpc_turbulent_kinetic_energy",
 }
 
 # These models currently fail accuracy with eager Adam optimizer
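As a side note (not part of the patch), the root cause described in the commit message can be reproduced with a short standalone sketch. It re-implements the pyhpc shape formula quoted above (`pyhpc_input_shape` is a hypothetical helper name, not a benchmark API) and shows why no input dim matches the performance-test batch size of 1048576, while the accuracy-test batch size of 4 does appear as a dim:

```python
import math

def pyhpc_input_shape(size):
    # Shape formula used by the pyhpc torchbenchmark models:
    # every dim is derived from the batch size via a cube root,
    # so no dim scales linearly with the batch size itself.
    return (
        math.ceil(2 * size ** (1 / 3)),
        math.ceil(2 * size ** (1 / 3)),
        math.ceil(0.25 * size ** (1 / 3)),
    )

# Performance test: batch size 1048576 never shows up as a dim,
# so the harness cannot mark any dim as dynamic and asserts.
perf_shape = pyhpc_input_shape(1048576)
print(perf_shape)             # (204, 204, 26)
print(1048576 in perf_shape)  # False -> "nothing in example_inputs had a dim"

# Accuracy test: batch size 4 happens to equal ceil(2 * 4 ** (1/3)),
# so the dim lookup succeeds and the accuracy run passes.
acc_shape = pyhpc_input_shape(4)
print(acc_shape)              # (4, 4, 1)
print(4 in acc_shape)         # True
```

The sketch also makes clear why the dims are only approximately related by `dim[0] = dim[2] * 8`: the ratio 2 / 0.25 is exactly 8, but the independent `math.ceil` on each dim breaks the exact relation (204 vs 26 * 8 = 208 at size 1048576), which is why the constraint is hard to express.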