#!/usr/bin/env python3
import argparse
import contextlib
import copy
import glob
import json
import os
import platform
import re
import shutil
import signal
import subprocess
import sys
import tempfile
import time
from collections import defaultdict
from collections.abc import Sequence
from contextlib import ExitStack
from datetime import datetime
from pathlib import Path
from typing import Any, cast, NamedTuple, Optional, Union
import pkg_resources
import torch
import torch.distributed as dist
from torch.multiprocessing import current_process, get_context
from torch.testing._internal.common_utils import (
    get_report_path,
    IS_CI,
    IS_MACOS,
    IS_WINDOWS,
    retry_shell,
    set_cwd,
    shell,
    TEST_CUDA,
    TEST_WITH_ASAN,
    TEST_WITH_CROSSREF,
    TEST_WITH_ROCM,
    TEST_WITH_SLOW_GRADCHECK,
)
# using tools/ to optimize test run.
REPO_ROOT = Path(__file__).resolve().parent.parent
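# (REPO_ROOT resolves two levels up from test/run_test.py, i.e. the repository checkout root.)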
sys.path.insert(0, str(REPO_ROOT))
from tools.stats.import_test_stats import (
    ADDITIONAL_CI_FILES_FOLDER,
    TEST_CLASS_TIMES_FILE,
    TEST_TIMES_FILE,
)
from tools.stats.upload_metrics import add_global_metric, emit_metric
from tools.testing.discover_tests import (
    CPP_TEST_PATH,
    CPP_TEST_PREFIX,
    CPP_TESTS_DIR,
    parse_test_module,
    TESTS,
)
from tools.testing.do_target_determination_for_s3 import import_results
from tools.testing.target_determination.gen_artifact import gen_ci_artifact
from tools.testing.target_determination.heuristics.previously_failed_in_pr import (
    gen_additional_test_failures_file,
)
from tools.testing.target_determination.heuristics.utils import get_pr_number
from tools.testing.test_run import TestRun
from tools.testing.test_selections import (
    calculate_shards,
    get_test_case_configs,
    NUM_PROCS,
    ShardedTest,
    THRESHOLD,
)
from tools.testing.upload_artifacts import zip_and_upload_artifacts
# Make sure to remove REPO_ROOT after import is done
sys.path.remove(str(REPO_ROOT))
HAVE_TEST_SELECTION_TOOLS = True
TEST_CONFIG = os.getenv("TEST_CONFIG", "")
BUILD_ENVIRONMENT = os.getenv("BUILD_ENVIRONMENT", "")
RERUN_DISABLED_TESTS = os.getenv("PYTORCH_TEST_RERUN_DISABLED_TESTS", "0") == "1"
DISTRIBUTED_TEST_PREFIX = "distributed"
INDUCTOR_TEST_PREFIX = "inductor"
IS_SLOW = "slow" in TEST_CONFIG or "slow" in BUILD_ENVIRONMENT
IS_S390X = platform.machine() == "s390x"
# Note [ROCm parallel CI testing]
# https://github.com/pytorch/pytorch/pull/85770 added file-granularity parallel testing.
# In .ci/pytorch/test.sh, when TEST_CONFIG == "default", both CUDA_VISIBLE_DEVICES and HIP_VISIBLE_DEVICES are set to 0.
# This results in multiple test files sharing the same GPU.
# This should be a supported use case for ROCm, but it exposed issues in the kernel driver resulting in hangs.
# See https://github.com/pytorch/pytorch/issues/90940.
#
# Further, ROCm self-hosted runners have up to 4 GPUs.
# Device visibility was set to 0 to match CUDA test behavior, but this was wasting available GPU resources.
# Assigning each Pool worker their own dedicated GPU avoids the ROCm oversubscription issues.
# This should also result in better overall wall clock time since all GPUs can be utilized.
def maybe_set_hip_visible_devies():
    # Special handling of ROCm GHA runners for parallel (file granularity) tests.
    if torch.version.hip:
        p = current_process()
        if p.name != "MainProcess":
            # this is a Process from a parallel Pool, not the MainProcess
            os.environ["HIP_VISIBLE_DEVICES"] = str(p._identity[0] % NUM_PROCS)
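
# For orientation, a minimal sketch of the worker-to-GPU mapping above,
# assuming NUM_PROCS == 2: Pool workers get 1-based identities, so consecutive
# workers cycle round-robin through the visible devices.
#   >>> [identity % 2 for identity in (1, 2, 3, 4)]
#   [1, 0, 1, 0]
# i.e. worker 1 -> GPU 1, worker 2 -> GPU 0, worker 3 -> GPU 1, ...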

def strtobool(s):
    return s.lower() not in {"", "0", "false", "off"}
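
# Note that this is laxer than distutils.util.strtobool: any value outside the
# four falsy spellings counts as true, e.g. (doctest-style illustration):
#   >>> strtobool("1"), strtobool("off"), strtobool("no")
#   (True, False, True)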
class TestChoices(list):
    def __init__(self, *args, **kwargs):
        super().__init__(args[0])

    def __contains__(self, item):
        return list.__contains__(self, parse_test_module(item))
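
# Why the __contains__ override: argparse checks each --include/--exclude value
# with `value in choices`, and routing that check through parse_test_module lets
# a qualified selector match its bare module entry. A sketch of the intent,
# assuming parse_test_module("test_torch.TestTorch") returns "test_torch":
#   >>> "test_torch.TestTorch" in TestChoices(["test_torch"])
#   True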
FSDP_TEST = [test for test in TESTS if test.startswith("distributed/fsdp")]
WINDOWS_BLOCKLIST = [
"distributed/nn/jit/test_instantiator",
"distributed/rpc/test_faulty_agent",
"distributed/rpc/test_tensorpipe_agent",
"distributed/rpc/test_share_memory",
"distributed/rpc/cuda/test_tensorpipe_agent",
"distributed/pipeline/sync/skip/test_api",
"distributed/pipeline/sync/skip/test_gpipe",
"distributed/pipeline/sync/skip/test_inspect_skip_layout",
"distributed/pipeline/sync/skip/test_leak",
"distributed/pipeline/sync/skip/test_portal",
"distributed/pipeline/sync/skip/test_stash_pop",
"distributed/pipeline/sync/skip/test_tracker",
"distributed/pipeline/sync/skip/test_verify_skippables",
"distributed/pipeline/sync/test_balance",
"distributed/pipeline/sync/test_bugs",
"distributed/pipeline/sync/test_checkpoint",
"distributed/pipeline/sync/test_copy",
"distributed/pipeline/sync/test_deferred_batch_norm",
"distributed/pipeline/sync/test_dependency",
"distributed/pipeline/sync/test_inplace",
"distributed/pipeline/sync/test_microbatch",
"distributed/pipeline/sync/test_phony",
"distributed/pipeline/sync/test_pipe",
"distributed/pipeline/sync/test_pipeline",
"distributed/pipeline/sync/test_stream",
"distributed/pipeline/sync/test_transparency",
"distributed/pipeline/sync/test_worker",
"distributed/elastic/agent/server/test/api_test",
"distributed/elastic/multiprocessing/api_test",
"distributed/_shard/checkpoint/test_checkpoint"
"distributed/_shard/checkpoint/test_file_system_checkpoint"
"distributed/_shard/sharding_spec/test_sharding_spec",
"distributed/_shard/sharding_plan/test_sharding_plan",
"distributed/_shard/sharded_tensor/test_sharded_tensor",
"distributed/_shard/sharded_tensor/test_sharded_tensor_reshard",
"distributed/_shard/sharded_tensor/ops/test_embedding",
"distributed/_shard/sharded_tensor/ops/test_embedding_bag",
"distributed/_shard/sharded_tensor/ops/test_binary_cmp",
"distributed/_shard/sharded_tensor/ops/test_init",
"distributed/_shard/sharded_optim/test_sharded_optim",
] + FSDP_TEST
ROCM_BLOCKLIST = [
"distributed/rpc/test_faulty_agent",
"distributed/rpc/test_tensorpipe_agent",
"distributed/rpc/test_share_memory",
"distributed/rpc/cuda/test_tensorpipe_agent",
"distributed/_shard/checkpoint/test_checkpoint"
"distributed/_shard/checkpoint/test_file_system_checkpoint"
"distributed/_shard/sharding_spec/test_sharding_spec",
"distributed/_shard/sharded_tensor/ops/test_embedding",
"distributed/_shard/sharded_tensor/ops/test_embedding_bag",
"distributed/_shard/sharded_tensor/ops/test_binary_cmp",
"distributed/_shard/sharded_tensor/ops/test_init",
"distributed/_shard/sharded_optim/test_sharded_optim",
"test_determination",
"test_jit_legacy",
"test_cuda_nvml_based_avail",
"test_jit_cuda_fuser",
"distributed/tensor/test_attention",
]
# whitelist of tests for s390x
S390X_TESTLIST = [
"backends/xeon/test_launch.py",
"benchmark_utils/test_benchmark_utils.py",
"cpp/apply_utils_test",
"cpp/atest",
"cpp/basic",
"cpp/broadcast_test",
"cpp/cpu_generator_test",
"cpp/Dict_test",
"cpp/Dimname_test",
"cpp/dlconvertor_test",
"cpp/extension_backend_test",
"cpp/lazy_tensor_test",
"cpp/legacy_vmap_test",
"cpp/NamedTensor_test",
"cpp/native_test",
"cpp/operators_test",
"cpp/scalar_tensor_test",
"cpp/scalar_test",
"cpp/tensor_iterator_test",
"cpp/test_api",
"cpp/undefined_tensor_test",
"cpp/wrapdim_test",
"distributions/test_constraints",
"doctests",
"dynamo/test_activation_checkpointing",
"dynamo/test_after_aot",
"dynamo/test_aot_autograd",
"dynamo/test_aot_autograd_cache",
"dynamo/test_autograd_function",
"dynamo/test_backends",
"dynamo/test_backward_higher_order_ops",
"dynamo/test_base_output",
"dynamo/test_bytecode_utils",
"dynamo/test_callback",
"dynamo/test_compile",
"dynamo/test_comptime",
"dynamo/test_config",
"dynamo/test_ctx_manager",
"dynamo/test_cudagraphs",
"dynamo/test_cudagraphs_expandable_segments",
"dynamo/test_debug_utils",
"dynamo/test_decorators",
"dynamo/test_deviceguard",
"dynamo/test_export",
"dynamo/test_export_mutations",
"dynamo/test_frame_init",
"dynamo/test_fx_passes_pre_grad",
"dynamo/test_global",
"dynamo/test_guard_manager",
"dynamo/test_higher_order_ops",
"dynamo/test_hooks",
"dynamo/test_input_attr_tracking",
"dynamo/test_interop",
"dynamo/test_logging",
"dynamo/test_minifier",
"dynamo/test_model_output",
"dynamo/test_modes",
"dynamo/test_modules",
"dynamo/test_nops",
"dynamo/test_optimizers",
"dynamo/test_pre_dispatch",
"dynamo/test_profiler",
"dynamo/test_python_autograd",
"dynamo/test_recompiles",
"dynamo/test_recompile_ux",
"dynamo/test_reconstruct",
"dynamo/test_reorder_logs",
"dynamo/test_repros",
"dynamo/test_resume",
"dynamo/test_sdpa",
"dynamo/test_skip_non_tensor",
"dynamo/test_sources",
"dynamo/test_structured_trace",
"dynamo/test_subclasses",
"dynamo/test_subgraphs",
"dynamo/test_torchrec",
"dynamo/test_unspec",
"dynamo/test_utils",
"dynamo/test_verify_correctness",
"dynamo/test_view",
"export/test_db",
"export/test_experimental",
"export/test_export",
"export/test_export_nonstrict",
"export/test_export_training_ir_to_run_decomp",
"export/test_functionalized_assertions",
"export/test_hop",
"export/test_lift_unlift",
"export/test_passes",
"export/test_pass_infra",
"export/test_retraceability",
"export/test_schema",
"export/test_serdes",
"export/test_serialize",
"export/test_sparse",
"export/test_swap",
"export/test_tools",
"export/test_torchbind",
"export/test_tree_utils",
"export/test_unflatten",
"export/test_unflatten_training_ir",
"export/test_verifier",
"functorch/test_ac",
"functorch/test_control_flow",
"functorch/test_eager_transforms",
"functorch/test_logging",
"functorch/test_minifier",
"higher_order_ops/test_with_effects.py",
"inductor/test_auto_functionalize",
"inductor/test_autoheuristic",
"inductor/test_b2b_gemm",
"inductor/test_benchmarking",
"inductor/test_ck_backend",
"inductor/test_codecache",
"inductor/test_codegen_triton",
"inductor/test_combo_kernels",
"inductor/test_compiled_autograd",
"inductor/test_compiled_optimizers",
"inductor/test_compile_worker",
"inductor/test_config",
"inductor/test_control_flow",
"inductor/test_coordinate_descent_tuner",
"inductor/test_cpp_wrapper_hipify",
"inductor/test_cpu_cpp_wrapper",
"inductor/test_cudagraph_trees",
"inductor/test_cudagraph_trees_expandable_segments",
"inductor/test_cuda_repro",
"inductor/test_custom_lowering",
"inductor/test_cutlass_backend",
"inductor/test_debug_trace",
"inductor/test_decompose_mem_bound_mm",
"inductor/test_dependencies",
"inductor/test_distributed_patterns",
"inductor/test_efficient_conv_bn_eval",
"inductor/test_extension_backend",
"inductor/test_external_callables",
"inductor/test_flex_attention",
"inductor/test_flex_decoding",
"inductor/test_foreach",
"inductor/test_fp8",
"inductor/test_fx_fusion",
"inductor/test_graph_transform_observer",
"inductor/test_group_batch_fusion",
"inductor/test_gpu_cpp_wrapper",
"inductor/test_halide",
"inductor/test_indexing",
"inductor/test_inductor_freezing",
"inductor/test_loop_ordering",
"inductor/test_memory",
"inductor/test_memory_planning",
"inductor/test_metrics",
"inductor/test_minifier",
"inductor/test_minifier_isolate",
"inductor/test_mmdecomp",
"inductor/test_padding",
"inductor/test_pad_mm",
"inductor/test_profiler",
"inductor/test_scatter_optimization",
"inductor/test_smoke",
"inductor/test_standalone_compile",
"inductor/test_torchbind",
"inductor/test_triton_cpu_backend",
"inductor/test_triton_extension_backend",
"inductor/test_triton_heuristics",
"inductor/test_triton_kernels",
"inductor/test_utils",
"inductor/test_xpu_basic",
"lazy/test_bindings",
"lazy/test_debug_util",
"lazy/test_extract_compiled_graph",
"lazy/test_functionalization",
"lazy/test_generator",
"lazy/test_reuse_ir",
"lazy/test_step_closures",
"lazy/test_ts_opinfo",
"nn/test_convolution.py",
"nn/test_dropout.py",
"nn/test_embedding.py",
"nn/test_init.py",
"nn/test_lazy_modules.py",
"nn/test_load_state_dict.py",
"nn/test_module_hooks.py",
"nn/test_multihead_attention.py",
"nn/test_packed_sequence.py",
"nn/test_parametrization.py",
"nn/test_pooling.py",
"nn/test_pruning.py",
"optim/test_lrscheduler",
"optim/test_swa_utils",
"profiler/test_cpp_thread",
"profiler/test_execution_trace",
"profiler/test_memory_profiler",
"profiler/test_record_function",
"profiler/test_torch_tidy",
"test_autocast",
"test_autograd",
"test_autograd_fallback",
"test_autoload",
"test_autoload_disable",
"test_autoload_enable",
"test_bundled_inputs",
"test_comparison_utils",
"test_compile_benchmark_util",
"test_complex",
"test_content_store",
"test_cpp_api_parity",
"test_cpp_extensions_aot_ninja",
"test_cpp_extensions_aot_no_ninja",
"test_cpp_extensions_jit",
"test_cpp_extensions_mtia_backend",
"test_cpp_extensions_stream_and_event",
"test_cuda",
"test_cuda_expandable_segments",
"test_cuda_multigpu",
"test_cuda_nvml_based_avail",
"test_cuda_primary_ctx",
"test_cuda_sanitizer",
"test_cuda_trace",
"test_custom_ops",
"test_datapipe",
"test_deploy",
"test_dispatch",
"test_dlpack",
"test_dynamic_shapes",
"test_expanded_weights",
"test_fake_tensor",
"test_file_check",
"test_flop_counter",
"test_functionalization",
"test_functionalization_of_rng_ops",
"test_functional_optim",
"test_function_schema",
"test_futures",
"test_hub",
"test_import_stats",
"test_indexing",
"test_itt",
"test_legacy_vmap",
"test_logging",
"test_masked",
"test_maskedtensor",
"test_matmul_cuda",
"test_mkldnn",
"test_mkldnn_fusion",
"test_mkldnn_verbose",
"test_mkl_verbose",
"test_mobile_optimizer",
"test_module_tracker",
"test_monitor",
"test_namedtuple_return_api",
"test_native_mha",
"test_nestedtensor",
"test_numba_integration",
"test_numpy_interop",
"test_openmp",
"test_out_dtype_op",
"test_overrides",
"test_package",
"test_per_overload_api",
"test_prims",
"test_pruning_op",
"test_python_dispatch",
"test_scatter_gather_ops",
"test_segment_reductions",
"test_serialization",
"test_set_default_mobile_cpu_allocator",
"test_shape_ops",
"test_show_pickle",
"test_sort_and_select",
"test_spectral_ops",
"test_stateless",
"test_subclass",
"test_tensorboard",
"test_tensor_creation_ops",
"test_tensorexpr",
"test_tensorexpr_pybind",
"test_torch",
"test_transformers",
"test_type_hints",
"test_type_info",
"test_type_promotion",
"test_typing",
"test_utils",
"test_view_ops",
"test_vulkan",
"test_weak",
"test_xnnpack_integration",
"torch_np/numpy_tests/core/test_dlpack",
"torch_np/numpy_tests/core/test_dtype",
"torch_np/numpy_tests/core/test_einsum",
"torch_np/numpy_tests/core/test_getlimits",
"torch_np/numpy_tests/core/test_indexing",
"torch_np/numpy_tests/core/test_numeric",
"torch_np/numpy_tests/core/test_numerictypes",
"torch_np/numpy_tests/core/test_scalar_ctors",
"torch_np/numpy_tests/core/test_scalarinherit",
"torch_np/numpy_tests/core/test_scalarmath",
"torch_np/numpy_tests/core/test_scalar_methods",
"torch_np/numpy_tests/core/test_shape_base",
"torch_np/numpy_tests/fft/test_helper",
"torch_np/numpy_tests/fft/test_pocketfft",
"torch_np/numpy_tests/lib/test_arraypad",
"torch_np/numpy_tests/lib/test_arraysetops",
"torch_np/numpy_tests/lib/test_function_base",
"torch_np/numpy_tests/lib/test_histograms",
"torch_np/numpy_tests/lib/test_index_tricks",
"torch_np/numpy_tests/lib/test_shape_base_",
"torch_np/numpy_tests/lib/test_twodim_base",
"torch_np/numpy_tests/lib/test_type_check",
"torch_np/numpy_tests/linalg/test_linalg",
"torch_np/test_basic",
"torch_np/test_binary_ufuncs",
"torch_np/test_dtype",
"torch_np/test_function_base",
"torch_np/test_ndarray_methods",
"torch_np/test_nep50_examples",
"torch_np/test_random",
"torch_np/test_reductions",
"torch_np/test_scalars_0D_arrays",
"torch_np/test_ufuncs_basic",
"torch_np/test_unary_ufuncs",
"xpu/test_conv.py",
"xpu/test_gemm.py",
]
XPU_BLOCKLIST = [
"test_autograd",
"profiler/test_cpp_thread",
"profiler/test_execution_trace",
"profiler/test_memory_profiler",
"profiler/test_profiler",
"profiler/test_profiler_tree",
"profiler/test_record_function",
"profiler/test_torch_tidy",
]
XPU_TEST = [
"test_xpu",
]
# The tests inside these files should never be run in parallel with each other
RUN_PARALLEL_BLOCKLIST = [
"test_extension_utils",
"test_cpp_extensions_jit",
"test_cpp_extensions_open_device_registration",
"test_cpp_extensions_stream_and_event",
"test_cpp_extensions_mtia_backend",
"test_jit_disabled",
"test_mobile_optimizer",
"test_multiprocessing",
"test_multiprocessing_spawn",
"test_namedtuple_return_api",
"test_overrides",
"test_show_pickle",
"test_tensorexpr",
"test_cuda_primary_ctx",
"test_cuda_trace",
"inductor/test_benchmark_fusion",
"test_cuda_nvml_based_avail",
# temporarily sets a global config
"test_autograd_fallback",
"inductor/test_compiler_bisector",
] + FSDP_TEST
# Test files that should always be run serially with other test files,
# but it's okay if the tests inside them are run in parallel with each other.
CI_SERIAL_LIST = [
"test_nn",
"test_fake_tensor",
"test_cpp_api_parity",
"test_reductions",
"test_fx_backends",
"test_cpp_extensions_jit",
"test_torch",
"test_tensor_creation_ops",
"test_dispatch",
"test_python_dispatch", # torch.library creation and deletion must be serialized
"test_spectral_ops", # Cause CUDA illegal memory access https://github.com/pytorch/pytorch/issues/88916
"nn/test_pooling",
"nn/test_convolution", # Doesn't respect set_per_process_memory_fraction, results in OOM for other tests in slow gradcheck
"distributions/test_distributions",
"test_fx", # gets SIGKILL
"functorch/test_memory_efficient_fusion", # Cause CUDA OOM on ROCm
"test_utils", # OOM
"test_sort_and_select", # OOM
"test_backward_compatible_arguments", # OOM
"test_autocast", # OOM
"test_native_mha", # OOM
"test_module_hooks", # OOM
"inductor/test_max_autotune",
"inductor/test_cutlass_backend", # slow due to many nvcc compilation steps,
"inductor/test_flex_attention", # OOM
]
# A subset of onnx tests that cannot run in parallel due to high memory usage.
ONNX_SERIAL_LIST = [
    "onnx/test_models",
    "onnx/test_models_quantized_onnxruntime",
    "onnx/test_models_onnxruntime",
    "onnx/test_custom_ops",
    "onnx/test_utility_funs",
]
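# (These entries feed the must_serial() check elsewhere in this file so the
# listed ONNX test files run serially rather than in parallel.)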
# A subset of the TESTS list that validates that PyTorch's ops, modules, and
# autograd behave as expected
CORE_TEST_LIST = [
    "test_autograd",
    "test_autograd_fallback",
    "test_modules",
    "test_nn",
    "test_ops",
    "test_ops_gradients",
    "test_ops_fwd_gradients",
    "test_ops_jit",
    "test_torch",
]
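# (This is the subset selected by the --core flag.)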
# A test file is considered slow if it takes longer than 5 min (300 s) to run
SLOW_TEST_THRESHOLD = 300
DISTRIBUTED_TESTS_CONFIG = {}
if dist.is_available():
DISTRIBUTED_TESTS_CONFIG["test"] = {"WORLD_SIZE": "1"}
if not TEST_WITH_ROCM and dist.is_mpi_available():
DISTRIBUTED_TESTS_CONFIG["mpi"] = {
"WORLD_SIZE": "3",
"TEST_REPORT_SOURCE_OVERRIDE": "dist-mpi",
}
if dist.is_nccl_available():
DISTRIBUTED_TESTS_CONFIG["nccl"] = {
"WORLD_SIZE": f"{torch.cuda.device_count()}",
"TEST_REPORT_SOURCE_OVERRIDE": "dist-nccl",
}
if dist.is_gloo_available():
DISTRIBUTED_TESTS_CONFIG["gloo"] = {
# TODO: retire testing gloo with CUDA
"WORLD_SIZE": f"{torch.cuda.device_count()}",
"TEST_REPORT_SOURCE_OVERRIDE": "dist-gloo",
}
# Test with UCC backend is deprecated.
# See https://github.com/pytorch/pytorch/pull/137161
# if dist.is_ucc_available():
# DISTRIBUTED_TESTS_CONFIG["ucc"] = {
# "WORLD_SIZE": f"{torch.cuda.device_count()}",
# "TEST_REPORT_SOURCE_OVERRIDE": "dist-ucc",
# "UCX_TLS": "tcp,cuda",
# "UCC_TLS": "nccl,ucp,cuda",
# "UCC_TL_UCP_TUNE": "cuda:0", # don't use UCP TL on CUDA as it is not well supported
# "UCC_EC_CUDA_USE_COOPERATIVE_LAUNCH": "n", # CI nodes (M60) fail if it is on
# }
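# For illustration (an assumption about a 2-GPU machine, not code that runs):
# the "nccl" entry above would be
#   {"WORLD_SIZE": "2", "TEST_REPORT_SOURCE_OVERRIDE": "dist-nccl"}
# and each entry supplies the extra environment variables for one run of the
# distributed tests under that backend.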
# https://stackoverflow.com/questions/2549939/get-signal-names-from-numbers-in-python
SIGNALS_TO_NAMES_DICT = {
    getattr(signal, n): n for n in dir(signal) if n.startswith("SIG") and "_" not in n
}
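# For example, SIGNALS_TO_NAMES_DICT[signal.SIGSEGV] == "SIGSEGV"; this lets a
# negative subprocess return code be reported as a readable signal name.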
CPP_EXTENSIONS_ERROR = """
Ninja (https://ninja-build.org) is required for some of the C++ extensions
tests, but it could not be found. Install ninja with `pip install ninja`
or `conda install ninja`. Alternatively, disable said tests with
`run_test.py --exclude test_cpp_extensions_aot_ninja test_cpp_extensions_jit`.
"""
PYTORCH_COLLECT_COVERAGE = bool(os.environ.get("PYTORCH_COLLECT_COVERAGE"))
JIT_EXECUTOR_TESTS = [
"test_jit_profiling",
"test_jit_legacy",
"test_jit_fuser_legacy",
]
INDUCTOR_TESTS = [test for test in TESTS if test.startswith(INDUCTOR_TEST_PREFIX)]
DISTRIBUTED_TESTS = [test for test in TESTS if test.startswith(DISTRIBUTED_TEST_PREFIX)]
TORCH_EXPORT_TESTS = [test for test in TESTS if test.startswith("export")]
AOT_DISPATCH_TESTS = [
    test for test in TESTS if test.startswith("functorch/test_aotdispatch")
]
FUNCTORCH_TESTS = [test for test in TESTS if test.startswith("functorch")]
ONNX_TESTS = [test for test in TESTS if test.startswith("onnx")]
CPP_TESTS = [test for test in TESTS if test.startswith(CPP_TEST_PREFIX)]
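# The lists above are derived from TESTS by filename prefix and back the
# corresponding selection flags (e.g. --onnx and --cpp).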
TESTS_REQUIRING_LAPACK = [
"distributions/test_constraints",
"distributions/test_distributions",
]
# These are just the slowest ones, this isn't an exhaustive list.
TESTS_NOT_USING_GRADCHECK = [
    # Note that you should use skipIfSlowGradcheckEnv if you do not wish to
    # skip all the tests in that file, e.g. test_mps
    "doctests",
    "test_meta",
    "test_hub",
    "test_fx",
    "test_decomp",
    "test_cpp_extensions_jit",
    "test_jit",
    "test_ops",
    "test_ops_jit",
    "dynamo/test_recompile_ux",
    "inductor/test_smoke",
    "test_quantization",
]
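# (Assumption: the slow-gradcheck CI config excludes the files listed here,
# since they do not exercise gradcheck.)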
def print_to_stderr(message):
    print(message, file=sys.stderr)
def get_executable_command(options, disable_coverage=False, is_cpp_test=False):
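    """Return the command prefix used to launch a single test file.

    Python tests run under `coverage run` when coverage collection is
    requested and under `python -bb` otherwise; C++ tests are launched
    through `pytest` (pytest-cpp) and do not support coverage yet.
    """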
    if options.coverage and not disable_coverage:
        if not is_cpp_test:
            executable = ["coverage", "run", "--parallel-mode", "--source=torch"]
        else:
            # TODO: C++ with coverage is not yet supported
            executable = []
    else:
        if not is_cpp_test:
            executable = [sys.executable, "-bb"]
        else:
            executable = ["pytest"]

    return executable
def run_test(
    test_module: ShardedTest,
    test_directory,
    options,
    launcher_cmd=None,
    extra_unittest_args=None,
    env=None,
    print_log=True,
) -> int:
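    """Run a single (possibly sharded) test module and return its exit code."""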
    scribe_token = os.getenv("SCRIBE_GRAPHQL_ACCESS_TOKEN", "")
    if scribe_token:
        print_to_stderr("SCRIBE_GRAPHQL_ACCESS_TOKEN is set")
    else:
        print_to_stderr("SCRIBE_GRAPHQL_ACCESS_TOKEN is NOT set")

    env = env or os.environ.copy()
    maybe_set_hip_visible_devies()
    unittest_args = options.additional_args.copy()
    test_file = test_module.name
    stepcurrent_key = test_file
    is_distributed_test = test_file.startswith(DISTRIBUTED_TEST_PREFIX)
    is_cpp_test = test_file.startswith(CPP_TEST_PREFIX)
    # NB: Rerunning disabled tests depends on pytest-flakefinder, which doesn't
    # work with pytest-cpp yet. We also can't disable C++ tests yet, so it's ok
    # to just return successfully here.
    if is_cpp_test and RERUN_DISABLED_TESTS:
        print_to_stderr(
            "Skipping C++ tests when running under RERUN_DISABLED_TESTS mode"
        )
        return 0

    if is_cpp_test:
2023-11-15 21:56:10 +00:00
stepcurrent_key = f"{test_file}_{os.urandom(8).hex()}"
else:
unittest_args.extend(
[
f"--shard-id={test_module.shard}",
f"--num-shards={test_module.num_shards}",
]
)
stepcurrent_key = f"{test_file}_{test_module.shard}_{os.urandom(8).hex()}"
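# NB: conftest.py's stepcurrent plugin (wired up via the --sc flag in
# get_pytest_args) records the last test that ran under this key so a rerun
# can resume from it. A sketch of the resulting key, assuming a hypothetical
# shard 1 and os.urandom(8).hex() == "9f3c1a2b4d5e6f70":
#   stepcurrent_key == "test_ops_1_9f3c1a2b4d5e6f70"
# The random suffix keeps repeated invocations of the same file/shard within
# one job from clobbering each other's saved position.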
if options.verbose:
Add Lowering for FlexAttention Backwards (#125515)

# Summary
Enables Inductor to generate the fused flex attention kernel for the backwards pass. Along the way: abstracts out the 'build_subgraph_buffer' subroutine and reuses it between flex attention forward and backward (three subgraphs are needed in total, since the FAv2 algorithm recomputes parts of the forward inside the backward kernel, and the blocks must currently be square); registers the decomp table + IndexMode missed when #123902 landed; fixes the reversed causality of the rel_bias helper and adds a test for "future causal" attention; updates `TritonTemplateKernel` to accept multiple subgraphs; and extends the benchmark to profile backwards performance. The store_output call is still hacked up as 'fake' and likely needs a mutated 'dq' and a stored 'dk' before landing.

### Benchmark Numbers:
_The current implementation is not parallelizing over ctx length in the bwd_

FWD Speedups

| Type | Speedup | shape | score_mod | dtype |
|---------|-----------|--------------------|-------------|----------------|
| Average | 0.991 | | | |
| Max | 1.182 | (16, 16, 4096, 64) | noop | torch.bfloat16 |
| Min | 0.796 | (2, 16, 512, 256) | head_bias | torch.bfloat16 |

BWD Speedups

| Type | Speedup | shape | score_mod | dtype |
|---------|-----------|--------------------|-------------|----------------|
| Average | 0.291 | | | |
| Max | 0.652 | (8, 16, 512, 64) | head_bias | torch.bfloat16 |
| Min | 0.073 | (2, 16, 4096, 128) | head_bias | torch.bfloat16 |

(Per-shape timing table omitted; see the PR for the full data.)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125515 Approved by: https://github.com/Chillee
2024-05-17 00:41:55 +00:00
unittest_args.append(f'-{"v" * options.verbose}') # in case of pytest
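# e.g. options.verbose == 2 produces "-vv"; pytest raises verbosity once per
# repeated "v", while plain unittest only understands a single -v (hence the
# "in case of pytest" note above).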
if test_file in RUN_PARALLEL_BLOCKLIST:
unittest_args = [
arg for arg in unittest_args if not arg.startswith("--run-parallel")
]
if extra_unittest_args:
assert isinstance(extra_unittest_args, list)
unittest_args.extend(extra_unittest_args)
# If using pytest, replace -f with equivalent -x
if options.pytest:
unittest_args.extend(
Do not collect and skip non-disabled tests when rerunning disabled tests (#102107) The console log blew up in rerun-disabled-tests mode (x50): each log was around 1GB, ~50GB uncompressed per job, mainly from the repeated SKIPPED messages for non-disabled tests. test/conftest.py now ignores skipped tests entirely when rerunning disabled tests, instead of collecting and then skipping them. The benefits are larger than expected: rerun-disabled-tests jobs finish in under half an hour, OOM runner crashes from too many collected tests are fixed, the verbosity issue is fixed (only the few hundred disabled tests run x50), and timeouts when rerunning disabled distributed and ASAN tests are fixed. Verified on https://github.com/pytorch/pytorch/actions/runs/5084508614, where test_ops_jit ran only its two disabled tests x50. Pull Request resolved: https://github.com/pytorch/pytorch/pull/102107 Approved by: https://github.com/clee2000, https://github.com/malfet
2023-05-27 12:10:32 +00:00
get_pytest_args(
options,
is_cpp_test=is_cpp_test,
is_distributed_test=is_distributed_test,
)
)
unittest_args.extend(test_module.get_pytest_args())
[BE][tests] show local variables on failure in tests (#131151) As per the title, add argument `--locals` for `unittest` and `--showlocals --tb=long` for `pytest` in CI. Some failures cannot be reproduced on the local machine but exist on cloud CI; this change allows us to investigate such test failures more easily. Example output: https://github.com/pytorch/pytorch/actions/runs/9961546996/job/27523888353?pr=130710#step:20:3361 Pull Request resolved: https://github.com/pytorch/pytorch/pull/131151 Approved by: https://github.com/ezyang
2024-07-29 16:04:26 +00:00
replacement = {"-f": "-x"}
unittest_args = [replacement.get(arg, arg) for arg in unittest_args]
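# A concrete example of the substitution above (illustrative values):
#   ["-f", "--reruns=2"] -> ["-x", "--reruns=2"]
# unittest's -f (failfast) and pytest's -x (exitfirst) both stop the run at
# the first failing test, so the flags are interchangeable here.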
if options.showlocals:
if options.pytest:
unittest_args.extend(["--showlocals", "--tb=long", "--color=yes"])
else:
unittest_args.append("--locals")
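# pytest spells it --showlocals (with --tb=long for full per-frame
# tracebacks), while unittest spells it --locals; both print local variables
# alongside failure tracebacks, which is what makes CI-only failures
# debuggable without a local repro.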
# NB: These features are not available for C++ tests, but there is little incentive
# to implement them because we have never seen a flaky C++ test before.
if IS_CI and not is_cpp_test:
Add a mode to rerun all disabled tests (without running anything else) (#88646) Rerun all disabled tests to gather their latest results so that disabled tickets can be closed automatically. Under this mode (RERUN_DISABLED_TESTS=true), only disabled tests are run (n=50 times each) while the rest are skipped. If a disabled test still flakes, nothing changes; if it passes every single time, it is marked so its issue can be closed later; if it fails after all retries, that is only reported and does not fail the job, since red signals don't matter here. Runs on the same daily schedule as mem_leak_check. Pull Request resolved: https://github.com/pytorch/pytorch/pull/88646 Approved by: https://github.com/clee2000
2022-11-15 05:08:26 +00:00
ci_args = ["--import-slow-tests", "--import-disabled-tests"]
if RERUN_DISABLED_TESTS:
ci_args.append("--rerun-disabled-tests")
# Use the downloaded test-case configuration; this is not supported in pytest
unittest_args.extend(ci_args)
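# In CI these flags tell the harness to honor the slow-test and disabled-test
# lists downloaded earlier in the job; with --rerun-disabled-tests the run
# flips into verification mode, where only the disabled tests execute (many
# times each) so tests that have stopped flaking can have their disable
# issues closed automatically.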
if test_file in PYTEST_SKIP_RETRIES:
if not options.pytest:
raise RuntimeError(
"A test running without pytest cannot skip retries using "
"the PYTEST_SKIP_RETRIES set."
)
unittest_args = [arg for arg in unittest_args if "--reruns" not in arg]
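# e.g. ["--reruns=2", "-x"] -> ["-x"]: tests listed in PYTEST_SKIP_RETRIES
# are meant to fail fast rather than be retried by pytest-rerunfailures.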
# Extra arguments are not supported with pytest
executable = get_executable_command(options, is_cpp_test=is_cpp_test)
if not executable:
# If there is no eligible executable, this is an unsupported case such as
# coverage for a C++ test, so just returning OK makes sense
return 0
if test_file.startswith(CPP_TEST_PREFIX):
Run C++ tests on CI with run_test.py (#99956) After #99559, C++ tests can run through run_test.py. Advanced features such as --import-slow-tests and --import-disabled-tests don't work yet, but C++ tests gain reliability and performance from retries and parallelism. This covers all C++ tests in CI, including aten, libtorch, and Vulkan, across Linux, Windows, and MacOS. Notes: the env variable CPP_TESTS_DIR can be set to where the C++ test binaries are located; pytest's -k argument is supported via run_test since pytest-cpp uses it to replace --gtest-filter; the XML output is in pytest format, which is fine because there is no slow or flaky test support for C++ yet; conftest.py is per-directory and needs to be in any directory that holds C++ tests, so --sc is unavailable for them for now; test_api and test_tensorexpr timed out on ASAN (likely because ASAN now runs on top of the python executable) and keep running as before there. Pull Request resolved: https://github.com/pytorch/pytorch/pull/99956 Approved by: https://github.com/clee2000, https://github.com/ZainRizvi
2023-05-09 21:24:12 +00:00
# C++ tests do not live in the regular test directory
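# e.g. the mock name "cpp/c10_Array_test" resolves to
# "$CPP_TESTS_DIR/c10_Array_test" when CPP_TESTS_DIR is set, otherwise to
# "<repo>/<CPP_TEST_PATH>/c10_Array_test" (CPP_TEST_PATH is expected to be
# the build/bin output directory)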
if CPP_TESTS_DIR:
cpp_test = os.path.join(
CPP_TESTS_DIR,
test_file.replace(f"{CPP_TEST_PREFIX}/", ""),
)
else:
cpp_test = os.path.join(
Path(test_directory).parent,
CPP_TEST_PATH,
test_file.replace(f"{CPP_TEST_PREFIX}/", ""),
)
argv = [
cpp_test if sys.platform != "win32" else cpp_test + ".exe"
] + unittest_args
else:
        # Can't call `python -m unittest test_*` here because it doesn't run code
        # in `if __name__ == '__main__':`. So call `python test_*.py` instead.
argv = [test_file + ".py"] + unittest_args
os.makedirs(REPO_ROOT / "test" / "test-reports", exist_ok=True)
if options.pipe_logs:
log_fd, log_path = tempfile.mkstemp(
dir=REPO_ROOT / "test" / "test-reports",
prefix=f"{sanitize_file_name(str(test_module))}_",
suffix="_toprint.log",
)
os.close(log_fd)
command = (launcher_cmd or []) + executable + argv
should_retry = (
"--subprocess" not in command
and not RERUN_DISABLED_TESTS
and not is_cpp_test
and "-n" not in command
)
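    # i.e. retries are deliberately skipped for runs that manage their own
    # subprocesses or reruns: --subprocess mode, rerun-disabled-tests mode,
    # C++ tests, and pytest-xdist parallel runs (-n)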
timeout = (
None
if not options.enable_timeout
else THRESHOLD * 6
if IS_SLOW
else THRESHOLD * 3
if should_retry
and isinstance(test_module, ShardedTest)
and test_module.time is not None
else THRESHOLD * 3
if is_cpp_test
else None
)
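    # For readability: the nested conditional expression above resolves to
    #
    #   if not options.enable_timeout:
    #       timeout = None
    #   elif IS_SLOW:
    #       timeout = THRESHOLD * 6
    #   elif (
    #       should_retry
    #       and isinstance(test_module, ShardedTest)
    #       and test_module.time is not None
    #   ):
    #       timeout = THRESHOLD * 3
    #   elif is_cpp_test:
    #       timeout = THRESHOLD * 3
    #   else:
    #       timeout = None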
print_to_stderr(f"Executing {command} ... [{datetime.now()}]")
with ExitStack() as stack:
output = None
if options.pipe_logs:
output = stack.enter_context(open(log_path, "w"))
if should_retry:
ret_code, was_rerun = run_test_retries(
command,
test_directory,
env,
timeout,
stepcurrent_key,
output,
options.continue_through_error,
)
else:
command.extend([f"--sc={stepcurrent_key}", "--print-items"])
ret_code, was_rerun = retry_shell(
command,
test_directory,
stdout=output,
stderr=output,
env=env,
timeout=timeout,
retries=0,
)
    # Pytest return code 5 means no test was collected. Exit code 4 is
    # returned when the binary is not a C++ test executable, but 4 can
    # also be returned if the file fails before running any tests. All
    # binaries under build/bin that are not C++ tests have, at the time
    # of this writing, been excluded; new ones should be added to the
    # list of exclusions in tools/testing/discover_tests.py
ret_code = 0 if ret_code == 5 else ret_code
if options.pipe_logs and print_log:
handle_log_file(
test_module, log_path, failed=(ret_code != 0), was_rerun=was_rerun
)
return ret_code
def install_cpp_extensions(cpp_extensions_test_dir, env=os.environ):
# Wipe the build folder, if it exists already
cpp_extensions_test_build_dir = os.path.join(cpp_extensions_test_dir, "build")
if os.path.exists(cpp_extensions_test_build_dir):
shutil.rmtree(cpp_extensions_test_build_dir)
# Build the test cpp extensions modules
cmd = [sys.executable, "setup.py", "install", "--root", "./install"]
return_code = shell(cmd, cwd=cpp_extensions_test_dir, env=env)
if return_code != 0:
return None, return_code
install_directory = ""
# install directory is the one that is named site-packages
for root, directories, _ in os.walk(
os.path.join(cpp_extensions_test_dir, "install")
):
for directory in directories:
if "-packages" in directory:
install_directory = os.path.join(root, directory)
assert install_directory, "install_directory must not be empty"
return install_directory, 0
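# Usage sketch for install_cpp_extensions (paths are illustrative): the helper
# runs `setup.py install --root ./install` and returns the nested site-packages
# directory it finds, e.g.
#
#   install_dir, rc = install_cpp_extensions("/path/to/test/cpp_extensions")
#   # rc == 0 and install_dir ends with ".../install/<prefix>/site-packages"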
@contextlib.contextmanager
def extend_python_path(install_directory):
python_path = os.environ.get("PYTHONPATH", "")
try:
os.environ["PYTHONPATH"] = os.pathsep.join([install_directory, python_path])
yield
finally:
os.environ["PYTHONPATH"] = python_path
def try_set_cpp_stack_traces(env, command, set=True):
# Print full c++ stack traces during retries
env = env or {}
env["TORCH_SHOW_CPP_STACKTRACES"] = "1" if set else "0"
return env
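# e.g. try_set_cpp_stack_traces({}, command) returns
# {"TORCH_SHOW_CPP_STACKTRACES": "1"}, and set=False flips the value to "0".
# Note that the `command` argument is currently unused.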
def run_test_retries(
command,
test_directory,
env,
timeout,
stepcurrent_key,
output,
continue_through_error,
):
    # Run the test with -x to stop at the first failure. Rerun the failing
    # test by itself. If it succeeds, move on to the rest of the tests in a
    # new process. If it still fails, see below.
    #
    # If continue through error is not set, then we fail fast.
    #
    # If continue through error is set, then we skip that test and keep going.
    # Basically, if the same test fails 3 times in a row, skip the test on the
    # next run, but still fail in the end. I take advantage of the value saved
    # in stepcurrent to keep track of the most recently run test (which is the
    # one that failed, if there was a failure).
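    #
    # Illustrative sequence with a hypothetical failing test (flag semantics
    # inferred from their use below: --sc resumes the suite, --rs reruns the
    # most recently saved test by itself, --scs skips it and continues):
    #
    #   pass 1: pytest ... --sc=KEY   -> stops at the first failure, test_b
    #   pass 2: pytest ... --rs=KEY   -> reruns test_b alone
    #   pass 3: pytest ... --scs=KEY  -> test_b passed in isolation (or failed
    #           three times with continue-through-error set): skip it, move on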
def print_to_file(s):
print(s, file=output, flush=True)
num_failures = defaultdict(int)
print_items = ["--print-items"]
sc_command = f"--sc={stepcurrent_key}"
while True:
ret_code, _ = retry_shell(
command + [sc_command] + print_items,
test_directory,
stdout=output,
stderr=output,
env=env,
timeout=timeout,
            retries=0,  # no built-in retries; this loop retries manually, which also handles timeout exceptions well
)
ret_code = 0 if ret_code == 5 else ret_code
if ret_code == 0 and not sc_command.startswith("--rs="):
break # Got to the end of the test suite successfully
signal_name = f" ({SIGNALS_TO_NAMES_DICT[-ret_code]})" if ret_code < 0 else ""
print_to_file(f"Got exit code {ret_code}{signal_name}")
# Read what just failed/ran
try:
with open(
REPO_ROOT / ".pytest_cache/v/cache/stepcurrent" / stepcurrent_key
) as f:
current_failure = f.read()
except FileNotFoundError:
            print_to_file(
                "No stepcurrent file found. Either pytest didn't get to run (e.g. import error)"
                + " or the file got deleted (contact dev infra)"
            )
break
env = try_set_cpp_stack_traces(env, command, set=False)
if ret_code != 0:
num_failures[current_failure] += 1
if ret_code == 0:
# Rerunning the previously failing test succeeded, so now we can
# skip it and move on
sc_command = f"--scs={stepcurrent_key}"
print_to_file(
"Test succeeeded in new process, continuing with the rest of the tests"
)
elif num_failures[current_failure] >= 3:
if not continue_through_error:
print_to_file("Stopping at first consistent failure")
break
sc_command = f"--scs={stepcurrent_key}"
print_to_file(
"Test failed consistently, "
"continuing with the rest of the tests due to continue-through-error being set"
)
else:
env = try_set_cpp_stack_traces(env, command, set=True)
sc_command = f"--rs={stepcurrent_key}"
print_to_file("Retrying single test...")
        print_items = []  # stop printing the item list on reruns; it wastes a lot of log space
consistent_failures = [x[1:-1] for x in num_failures.keys() if num_failures[x] >= 3]
flaky_failures = [x[1:-1] for x in num_failures.keys() if 0 < num_failures[x] < 3]
if len(flaky_failures) > 0:
print_to_file(
"The following tests failed and then succeeded when run in a new process"
+ f"{flaky_failures}",
)
if len(consistent_failures) > 0:
print_to_file(f"The following tests failed consistently: {consistent_failures}")
return 1, True
return ret_code, any(x > 0 for x in num_failures.values())
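# run_test_retries returns (exit_code, was_rerun); the exit code is forced to 1
# whenever any single test failed three times, even under continue-through-error.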
def run_test_with_subprocess(test_module, test_directory, options):
return run_test(
test_module, test_directory, options, extra_unittest_args=["--subprocess"]
)
def _test_cpp_extensions_aot(test_directory, options, use_ninja):
if use_ninja:
try:
from torch.utils import cpp_extension
cpp_extension.verify_ninja_availability()
except RuntimeError:
print_to_stderr(CPP_EXTENSIONS_ERROR)
return 1
# Wipe the build folder, if it exists already
cpp_extensions_test_dir = os.path.join(test_directory, "cpp_extensions")
cpp_extensions_test_build_dir = os.path.join(cpp_extensions_test_dir, "build")
if os.path.exists(cpp_extensions_test_build_dir):
shutil.rmtree(cpp_extensions_test_build_dir)
# Build the test cpp extensions modules
shell_env = os.environ.copy()
shell_env["USE_NINJA"] = str(1 if use_ninja else 0)
install_cmd = [sys.executable, "setup.py", "install", "--root", "./install"]
wheel_cmd = [sys.executable, "setup.py", "bdist_wheel"]
return_code = shell(install_cmd, cwd=cpp_extensions_test_dir, env=shell_env)
if return_code != 0:
return return_code
if sys.platform != "win32":
exts_to_build = [(install_cmd, "no_python_abi_suffix_test")]
if TEST_CUDA:
exts_to_build.append((wheel_cmd, "python_agnostic_extension"))
for cmd, extension_dir in exts_to_build:
return_code = shell(
cmd,
cwd=os.path.join(cpp_extensions_test_dir, extension_dir),
env=shell_env,
)
if return_code != 0:
return return_code
os.environ["USE_NINJA"] = shell_env["USE_NINJA"]
test_module = "test_cpp_extensions_aot" + ("_ninja" if use_ninja else "_no_ninja")
    shutil.copyfile(
test_directory + "/test_cpp_extensions_aot.py",
test_directory + "/" + test_module + ".py",
)
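    # e.g. with use_ninja=True the copy above produces
    # test_cpp_extensions_aot_ninja.py, so the two build modes are reported as
    # separate test modules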
try:
cpp_extensions = os.path.join(test_directory, "cpp_extensions")
install_directory = ""
        # install directory is the one that is named site-packages (the same
        # discovery logic as install_cpp_extensions above)
for root, directories, _ in os.walk(os.path.join(cpp_extensions, "install")):
for directory in directories:
if "-packages" in directory:
install_directory = os.path.join(root, directory)
assert install_directory, "install_directory must not be empty"
with extend_python_path(install_directory):
return run_test(ShardedTest(test_module, 1, 1), test_directory, options)
finally:
if os.path.exists(test_directory + "/" + test_module + ".py"):
os.remove(test_directory + "/" + test_module + ".py")
os.environ.pop("USE_NINJA")
def test_cpp_extensions_aot_ninja(test_module, test_directory, options):
return _test_cpp_extensions_aot(test_directory, options, use_ninja=True)
def test_cpp_extensions_aot_no_ninja(test_module, test_directory, options):
return _test_cpp_extensions_aot(test_directory, options, use_ninja=False)
def test_autoload_enable(test_module, test_directory, options):
return _test_autoload(test_directory, options, enable=True)
def test_autoload_disable(test_module, test_directory, options):
return _test_autoload(test_directory, options, enable=False)
def _test_autoload(test_directory, options, enable=True):
cpp_extensions_test_dir = os.path.join(test_directory, "cpp_extensions")
install_directory, return_code = install_cpp_extensions(cpp_extensions_test_dir)
if return_code != 0:
return return_code
try:
os.environ["TORCH_DEVICE_BACKEND_AUTOLOAD"] = str(int(enable))
with extend_python_path(install_directory):
cmd = [sys.executable, "test_autoload.py"]
return_code = shell(cmd, cwd=test_directory, env=os.environ)
return return_code
finally:
os.environ.pop("TORCH_DEVICE_BACKEND_AUTOLOAD")
def run_test_with_openreg(test_module, test_directory, options):
openreg_dir = os.path.join(
test_directory, "cpp_extensions", "open_registration_extension"
)
install_dir, return_code = install_cpp_extensions(openreg_dir)
if return_code != 0:
return return_code
with extend_python_path(install_dir):
return run_test(test_module, test_directory, options)
def test_distributed(test_module, test_directory, options):
    # MPI tests are broken with Python 3.9 and newer, so treat MPI as
    # unavailable there
mpi_available = subprocess.call(
"command -v mpiexec", shell=True
) == 0 and sys.version_info < (3, 9)
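    # `command -v mpiexec` exits 0 only when mpiexec is on PATH, making it a
    # cheap availability probe on POSIX shells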
if options.verbose and not mpi_available:
print_to_stderr("MPI not available -- MPI backend tests will be skipped")
config = DISTRIBUTED_TESTS_CONFIG
for backend, env_vars in config.items():
if sys.platform == "win32" and backend != "gloo":
continue
if backend == "mpi" and not mpi_available:
continue
for with_init_file in {True, False}:
if sys.platform == "win32" and not with_init_file:
continue
2018-03-09 21:02:02 +00:00
tmp_dir = tempfile.mkdtemp()
if options.verbose:
init_str = "with {} init_method"
with_init = init_str.format("file" if with_init_file else "env")
print_to_stderr(
f"Running distributed tests for the {backend} backend {with_init}"
)
2022-05-03 23:01:42 +00:00
old_environ = dict(os.environ)
os.environ["TEMP_DIR"] = tmp_dir
os.environ["BACKEND"] = backend
os.environ.update(env_vars)
try:
os.mkdir(os.path.join(tmp_dir, "barrier"))
os.mkdir(os.path.join(tmp_dir, "test_dir"))
if backend == "mpi":
            # Probe mpiexec for optional flags (--allow-run-as-root, --noprefix)
            # by running a no-op command; a flag is used only if the probe exits 0
with open(os.devnull, "w") as devnull:
allowrunasroot_opt = (
"--allow-run-as-root"
if subprocess.call(
'mpiexec --allow-run-as-root -n 1 bash -c ""',
shell=True,
stdout=devnull,
stderr=subprocess.STDOUT,
)
== 0
else ""
)
noprefix_opt = (
"--noprefix"
if subprocess.call(
f'mpiexec {allowrunasroot_opt} -n 1 --noprefix bash -c ""',
shell=True,
stdout=devnull,
stderr=subprocess.STDOUT,
)
== 0
else ""
)
mpiexec = ["mpiexec", "-n", "3", noprefix_opt, allowrunasroot_opt]
return_code = run_test(
test_module, test_directory, options, launcher_cmd=mpiexec
)
else:
return_code = run_test(
test_module,
test_directory,
options,
extra_unittest_args=["--subprocess"],
)
if return_code != 0:
return return_code
finally:
shutil.rmtree(tmp_dir)
os.environ.clear()
os.environ.update(old_environ)
return 0
def run_doctests(test_module, test_directory, options):
"""
    Assumes the incoming test module is the special "doctests" entry, and simply
    executes the xdoctest runner on the torch library itself.
"""
import xdoctest
pkgpath = Path(torch.__file__).parent
exclude_module_list = ["torch._vendor.*"]
enabled = {
# TODO: expose these options to the user
# For now disable all feature-conditional tests
# 'lapack': 'auto',
# 'cuda': 'auto',
# 'cuda1': 'auto',
# 'qengine': 'auto',
"lapack": 0,
"cuda": 0,
"cuda1": 0,
"qengine": 0,
"autograd_profiler": 0,
"cpp_ext": 0,
"monitor": 0,
"onnx": "auto",
}
# Resolve "auto" based on a test to determine if the feature is available.
if enabled["cuda"] == "auto" and torch.cuda.is_available():
enabled["cuda"] = True
if (
enabled["cuda1"] == "auto"
and torch.cuda.is_available()
and torch.cuda.device_count() > 1
):
enabled["cuda1"] = True
if enabled["lapack"] == "auto" and torch._C.has_lapack:
enabled["lapack"] = True
if enabled["qengine"] == "auto":
try:
# Is there a better check if quantization is enabled?
import torch.ao.nn.quantized as nnq # NOQA: F401
torch.backends.quantized.engine = "qnnpack"
torch.backends.quantized.engine = "fbgemm"
except (ImportError, RuntimeError):
...
else:
enabled["qengine"] = True
if enabled["onnx"] == "auto":
try:
import onnx # NOQA: F401
import onnxruntime # NOQA: F401
import onnxscript # NOQA: F401
except ImportError:
exclude_module_list.append("torch.onnx.*")
enabled["onnx"] = False
else:
enabled["onnx"] = True
# Set doctest environment variables
if enabled["cuda"]:
os.environ["TORCH_DOCTEST_CUDA"] = "1"
if enabled["cuda1"]:
os.environ["TORCH_DOCTEST_CUDA1"] = "1"
if enabled["lapack"]:
os.environ["TORCH_DOCTEST_LAPACK"] = "1"
if enabled["qengine"]:
os.environ["TORCH_DOCTEST_QENGINE"] = "1"
if enabled["autograd_profiler"]:
os.environ["TORCH_DOCTEST_AUTOGRAD_PROFILER"] = "1"
if enabled["cpp_ext"]:
os.environ["TORCH_DOCTEST_CPP_EXT"] = "1"
if enabled["monitor"]:
os.environ["TORCH_DOCTEST_MONITOR"] = "1"
if enabled["onnx"]:
os.environ["TORCH_DOCTEST_ONNX"] = "1"
if torch.mps.is_available():
os.environ["TORCH_DOCTEST_MPS"] = "1"
    if False:
# TODO: could try to enable some of these
os.environ["TORCH_DOCTEST_QUANTIZED_DYNAMIC"] = "1"
os.environ["TORCH_DOCTEST_ANOMALY"] = "1"
os.environ["TORCH_DOCTEST_AUTOGRAD"] = "1"
os.environ["TORCH_DOCTEST_HUB"] = "1"
os.environ["TORCH_DOCTEST_DATALOADER"] = "1"
os.environ["TORCH_DOCTEST_FUTURES"] = "1"
xdoctest_config = {
"global_exec": r"\n".join(
[
"from torch import nn",
"import torch.nn.functional as F",
"import torch",
]
),
"analysis": "static", # set to "auto" to test doctests in compiled modules
"style": "google",
"options": "+IGNORE_WHITESPACE",
}
xdoctest_verbose = max(1, options.verbose)
run_summary = xdoctest.runner.doctest_module(
os.fspath(pkgpath),
config=xdoctest_config,
verbose=xdoctest_verbose,
command=options.xdoctest_command,
argv=[],
exclude=exclude_module_list,
)
result = 1 if run_summary.get("n_failed", 0) else 0
return result
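# Illustrative usage, assuming run_test's usual -i/--include selector:
#   python test/run_test.py -i doctests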
def sanitize_file_name(file: str) -> str:
return file.replace("\\", ".").replace("/", ".").replace(" ", "_")
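# e.g. sanitize_file_name("distributed/rpc/test agent") == "distributed.rpc.test_agent"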
def handle_log_file(
test: ShardedTest, file_path: str, failed: bool, was_rerun: bool
) -> None:
test = str(test)
with open(file_path, errors="ignore") as f:
full_text = f.read()
new_file = "test/test-reports/" + sanitize_file_name(
f"{test}_{os.urandom(8).hex()}_.log"
)
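    # e.g. "test/test-reports/distributed.test_store_1a2b3c4d5e6f7a8b_.log"
    # (the os.urandom(8).hex() component is 16 hex characters)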
os.rename(file_path, REPO_ROOT / new_file)
    if not failed and not was_rerun and "=== RERUNS ===" not in full_text:
        # If the test succeeded with no retries, print only which tests ran.
        # (There is no way to detect test-level retries short of re-parsing the
        # XML report, hence the substring check above.)
print_to_stderr(
f"\n{test} was successful, full logs can be found in artifacts with path {new_file}"
)
for line in full_text.splitlines():
if re.search("Running .* items in this shard:", line):
print_to_stderr(line.rstrip())
print_to_stderr("")
return
# otherwise: print entire file
print_to_stderr(f"\nPRINTING LOG FILE of {test} ({new_file})")
print_to_stderr(full_text)
print_to_stderr(f"FINISHED PRINTING LOG FILE of {test} ({new_file})\n")
def get_pytest_args(options, is_cpp_test=False, is_distributed_test=False):
if RERUN_DISABLED_TESTS:
        # Distributed tests are too slow, so running them x50 would cause the jobs to time
        # out after 3+ hours. So, opt for fewer reruns. We need at least 150 instances of
        # the test every 2 weeks to satisfy the SQL query (15 x 14 = 210). The same logic applies
# to ASAN, which is also slow
count = 15 if is_distributed_test or TEST_WITH_ASAN else 50
        # In rerun-disabled-tests mode, run the same tests multiple times to determine
        # their flakiness status. Default to 50 re-runs
rerun_options = ["--flake-finder", f"--flake-runs={count}"]
else:
        # In normal mode, retry a failed test 2 more times. -x means stop at the first
        # failure
rerun_options = ["-x", "--reruns=2"]
pytest_args = [
"-vv",
"-rfEX",
]
if not is_cpp_test:
        # Python tests go through our custom pytest shard, which conflicts with the
        # normal xdist plugin, so disable xdist here; C++ tests are instead run with
        # pytest directly (see below)
pytest_args.extend(["-p", "no:xdist", "--use-pytest"])
else:
        # Use pytest-xdist to run C++ tests in parallel, as running them sequentially using run_test
# is much slower than running them directly
pytest_args.extend(["-n", str(NUM_PROCS)])
if IS_CI:
            # Add the option to generate the XML test report here, since C++ tests
            # do not go through common_utils
test_report_path = get_report_path(pytest=True)
pytest_args.extend(["--junit-xml-reruns", test_report_path])
if options.pytest_k_expr:
pytest_args.extend(["-k", options.pytest_k_expr])
pytest_args.extend(rerun_options)
return pytest_args
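# Illustrative result for a Python test in normal (non-CI) mode, in order:
#   ["-vv", "-rfEX", "-p", "no:xdist", "--use-pytest", "-x", "--reruns=2"]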
def run_ci_sanity_check(test: ShardedTest, test_directory, options):
assert (
test.name == "test_ci_sanity_check_fail"
), f"This handler only works for test_ci_sanity_check_fail, got {test.name}"
ret_code = run_test(test, test_directory, options, print_log=False)
# This test should fail
if ret_code != 1:
return 1
test_reports_dir = str(REPO_ROOT / "test/test-reports")
# Delete the log files and xmls generated by the test
for file in glob.glob(f"{test_reports_dir}/{test.name}*.log"):
os.remove(file)
for dirname in glob.glob(f"{test_reports_dir}/**/{test.name}"):
shutil.rmtree(dirname)
return 0
CUSTOM_HANDLERS = {
"test_cuda_primary_ctx": run_test_with_subprocess,
"test_cuda_nvml_based_avail": run_test_with_subprocess,
"test_cuda_trace": run_test_with_subprocess,
"test_cpp_extensions_aot_no_ninja": test_cpp_extensions_aot_no_ninja,
"test_cpp_extensions_aot_ninja": test_cpp_extensions_aot_ninja,
"distributed/test_distributed_spawn": test_distributed,
"distributed/algorithms/quantization/test_quantization": test_distributed,
"distributed/test_c10d_nccl": run_test_with_subprocess,
"distributed/test_c10d_gloo": run_test_with_subprocess,
"distributed/test_c10d_ucc": run_test_with_subprocess,
"distributed/test_c10d_common": run_test_with_subprocess,
"distributed/test_c10d_spawn_gloo": run_test_with_subprocess,
"distributed/test_c10d_spawn_nccl": run_test_with_subprocess,
"distributed/test_c10d_spawn_ucc": run_test_with_subprocess,
"distributed/test_store": run_test_with_subprocess,
"distributed/test_pg_wrapper": run_test_with_subprocess,
"distributed/rpc/test_faulty_agent": run_test_with_subprocess,
"distributed/rpc/test_tensorpipe_agent": run_test_with_subprocess,
"distributed/rpc/test_share_memory": run_test_with_subprocess,
"distributed/rpc/cuda/test_tensorpipe_agent": run_test_with_subprocess,
"doctests": run_doctests,
"test_ci_sanity_check_fail": run_ci_sanity_check,
"test_autoload_enable": test_autoload_enable,
"test_autoload_disable": test_autoload_disable,
"test_cpp_extensions_open_device_registration": run_test_with_openreg,
"test_transformers": run_test_with_openreg,
}
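# Dispatch sketch (assumed; the actual lookup lives in the main test loop):
#   handler = CUSTOM_HANDLERS.get(test_name, run_test)
#   return_code = handler(test_module, test_directory, options)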
PYTEST_SKIP_RETRIES = {"test_public_bindings"}
def parse_args():
parser = argparse.ArgumentParser(
description="Run the PyTorch unit test suite",
epilog="where TESTS is any of: {}".format(", ".join(TESTS)),
formatter_class=argparse.RawTextHelpFormatter,
)
parser.add_argument(
"-v",
"--verbose",
action="count",
default=0,
help="Print verbose information and test-by-test results",
)
parser.add_argument(
"--showlocals",
action=argparse.BooleanOptionalAction,
default=strtobool(os.environ.get("TEST_SHOWLOCALS", "False")),
help="Show local variables in tracebacks (default: True)",
)
parser.add_argument("--jit", "--jit", action="store_true", help="run all jit tests")
parser.add_argument(
"--distributed-tests",
"--distributed-tests",
action="store_true",
help="Run all distributed tests",
)
parser.add_argument(
"--functorch",
"--functorch",
action="store_true",
help=(
"If this flag is present, we will only run functorch tests. "
"If this flag is not present, we will run all tests "
"(including functorch tests)."
),
)
parser.add_argument(
"--mps",
"--mps",
action="store_true",
help=("If this flag is present, we will only run test_mps and test_metal"),
)
parser.add_argument(
"--xpu",
"--xpu",
action="store_true",
help=("If this flag is present, we will run xpu tests except XPU_BLOCK_LIST"),
)
parser.add_argument(
"--cpp",
"--cpp",
action="store_true",
help=("If this flag is present, we will only run C++ tests"),
)
parser.add_argument(
"-core",
"--core",
action="store_true",
help="Only run core tests, or tests that validate PyTorch's ops, modules,"
"and autograd. They are defined by CORE_TEST_LIST.",
)
parser.add_argument(
"--onnx",
"--onnx",
action="store_true",
help=(
"Only run ONNX tests, or tests that validate PyTorch's ONNX export. "
"If this flag is not present, we will exclude ONNX tests."
),
)
parser.add_argument(
"-k",
"--pytest-k-expr",
default="",
help="Pass to pytest as its -k expr argument",
)
parser.add_argument(
"-c",
"--coverage",
action="store_true",
help="enable coverage",
default=PYTORCH_COLLECT_COVERAGE,
)
parser.add_argument(
"-i",
"--include",
nargs="+",
choices=TestChoices(TESTS),
default=TESTS,
metavar="TESTS",
help="select a set of tests to include (defaults to ALL tests)."
" tests must be a part of the TESTS list defined in run_test.py",
)
parser.add_argument(
"-x",
"--exclude",
nargs="+",
choices=TESTS,
metavar="TESTS",
default=[],
help="select a set of tests to exclude",
)
parser.add_argument(
"--ignore-win-blocklist",
action="store_true",
help="always run blocklisted windows tests",
)
# NS: Disable target determination until it can be made more reliable
# parser.add_argument(
# "--determine-from",
# help="File of affected source filenames to determine which tests to run.",
# )
parser.add_argument(
"--continue-through-error",
"--keep-going",
action="store_true",
help="Runs the full test suite despite one of the tests failing",
default=strtobool(os.environ.get("CONTINUE_THROUGH_ERROR", "False")),
)
parser.add_argument(
"--pipe-logs",
action="store_true",
help="Print logs to output file while running tests. True if in CI and env var is not set",
default=IS_CI and not strtobool(os.environ.get("VERBOSE_TEST_LOGS", "False")),
)
parser.add_argument(
"--enable-timeout",
action="store_true",
help="Set a timeout based on the test times json file. Only works if there are test times available",
default=IS_CI and not strtobool(os.environ.get("NO_TEST_TIMEOUT", "False")),
)
parser.add_argument(
"--enable-td",
action="store_true",
help="Enables removing tests based on TD",
default=IS_CI
and (
TEST_WITH_CROSSREF
or TEST_WITH_ASAN
or (TEST_CONFIG == "distributed" and TEST_CUDA)
or (IS_WINDOWS and not TEST_CUDA)
or TEST_CONFIG == "nogpu_AVX512"
or TEST_CONFIG == "nogpu_NO_AVX2"
or TEST_CONFIG == "default"
)
and get_pr_number() is not None
and not strtobool(os.environ.get("NO_TD", "False"))
and not TEST_WITH_ROCM
and not IS_MACOS
and "xpu" not in BUILD_ENVIRONMENT
and "onnx" not in BUILD_ENVIRONMENT
and os.environ.get("GITHUB_WORKFLOW", "slow") in ("trunk", "pull"),
)
parser.add_argument(
"--shard",
nargs=2,
type=int,
help="runs a shard of the tests (taking into account other selections), e.g., "
"--shard 2 3 will break up the selected tests into 3 shards and run the tests "
"in the 2nd shard (the first number should not exceed the second)",
)
parser.add_argument(
"--exclude-jit-executor",
action="store_true",
help="exclude tests that are run for a specific jit config",
)
parser.add_argument(
"--exclude-torch-export-tests",
action="store_true",
help="exclude torch export tests",
)
parser.add_argument(
"--exclude-aot-dispatch-tests",
action="store_true",
help="exclude aot dispatch tests",
)
parser.add_argument(
"--exclude-distributed-tests",
action="store_true",
help="exclude distributed tests",
)
parser.add_argument(
"--exclude-inductor-tests",
action="store_true",
help="exclude inductor tests",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Only list the test that will run.",
)
parser.add_argument(
"--xdoctest-command",
default="all",
help=(
"Control the specific doctest action. "
"Use 'list' to simply parse doctests and check syntax. "
"Use 'all' to execute all doctests or specify a specific "
"doctest to run"
),
)
parser.add_argument(
"--no-translation-validation",
action="store_false",
help="Run tests without translation validation.",
)
parser.add_argument(
"--upload-artifacts-while-running",
action="store_true",
)
group = parser.add_mutually_exclusive_group()
group.add_argument(
"--dynamo",
action="store_true",
help="Run tests with TorchDynamo+EagerBackend turned on",
)
group.add_argument(
"--inductor",
action="store_true",
help="Run tests with TorchInductor turned on",
)
args, extra = parser.parse_known_args()
if "--" in extra:
extra.remove("--")
args.additional_args = extra
return args
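
# Example invocations (illustrative; the module names below are placeholders
# for entries in TESTS):
#   python test/run_test.py -i test_nn -x test_jit
#   python test/run_test.py --shard 2 3 --keep-going
#   python test/run_test.py -i test_nn -- -v
# Anything after a bare "--" is forwarded to the test module via
# args.additional_args.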
def exclude_tests(
exclude_list, selected_tests, exclude_message=None, exact_match=False
):
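    """Remove entries matching `exclude_list` from `selected_tests` in place
    and return the list.

    By default an exclusion entry removes every selected test whose name
    starts with it (prefix match); with exact_match=True only identical
    names are removed. When `exclude_message` is given, each exclusion is
    logged to stderr.
    """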
for exclude_test in exclude_list:
tests_copy = selected_tests[:]
for test in tests_copy:
if (
not exact_match and test.startswith(exclude_test)
) or test == exclude_test:
if exclude_message is not None:
print_to_stderr(f"Excluding {test} {exclude_message}")
selected_tests.remove(test)
return selected_tests
def must_serial(file: Union[str, ShardedTest]) -> bool:
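    """Decide whether a test file must run on its own rather than in the
    parallel pool: when PYTORCH_TEST_RUN_EVERYTHING_IN_SERIAL is set, for
    distributed configs and files, for tests with custom handlers or on one
    of the serial blocklists, or when only a single process is available.
    """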
if isinstance(file, ShardedTest):
file = file.name
return (
os.getenv("PYTORCH_TEST_RUN_EVERYTHING_IN_SERIAL", "0") == "1"
or DISTRIBUTED_TEST_PREFIX in os.getenv("TEST_CONFIG", "")
or DISTRIBUTED_TEST_PREFIX in file
or file in CUSTOM_HANDLERS
or file in RUN_PARALLEL_BLOCKLIST
or file in CI_SERIAL_LIST
or file in JIT_EXECUTOR_TESTS
or file in ONNX_SERIAL_LIST
or NUM_PROCS == 1
)
def can_run_in_pytest(test):
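    """Return True unless PYTORCH_TEST_DO_NOT_USE_PYTEST=1 opts this run out
    of pytest (the `test` argument is currently unused).
    """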
return os.getenv("PYTORCH_TEST_DO_NOT_USE_PYTEST", "0") == "0"
def get_selected_tests(options) -> list[str]:
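    """Compute the final list of tests to run for this invocation.

    Starts from options.include, narrows by mode flags (--jit, --core,
    --functorch, --cpp, --mps, --xpu, --onnx, distributed), applies the
    --exclude-* options, and drops tests that cannot run on this platform
    or build (Windows/ROCm/s390x blocklists, no distributed support, no
    LAPACK, slow-gradcheck mode).
    """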
selected_tests = options.include
# for s390x, override defaults
if IS_S390X and selected_tests == TESTS:
selected_tests = S390X_TESTLIST
# filter if there's JIT only and distributed only test options
if options.jit:
selected_tests = list(
filter(lambda test_name: "jit" in test_name, selected_tests)
)
if options.distributed_tests:
selected_tests = list(
filter(lambda test_name: test_name in DISTRIBUTED_TESTS, selected_tests)
)
# Filter to only run core tests when --core option is specified
if options.core:
selected_tests = list(
filter(lambda test_name: test_name in CORE_TEST_LIST, selected_tests)
)
# Filter to only run functorch tests when --functorch option is specified
if options.functorch:
selected_tests = [tname for tname in selected_tests if tname in FUNCTORCH_TESTS]
if options.cpp:
selected_tests = [tname for tname in selected_tests if tname in CPP_TESTS]
else:
# Exclude all C++ tests otherwise as they are still handled differently
        # than Python tests at the moment
options.exclude.extend(CPP_TESTS)
if options.mps:
selected_tests = [
"test_mps",
"test_metal",
"test_modules",
"nn/test_convolution",
"nn/test_dropout",
"nn/test_pooling",
"test_view_ops",
"test_nn",
"inductor/test_mps_basic",
]
else:
# Exclude all mps tests otherwise
options.exclude.extend(["test_mps", "test_metal"])
if options.xpu:
selected_tests = exclude_tests(XPU_BLOCKLIST, selected_tests, "on XPU")
else:
        # Exclude all xpu-specific tests otherwise
options.exclude.extend(XPU_TEST)
# Filter to only run onnx tests when --onnx option is specified
onnx_tests = [tname for tname in selected_tests if tname in ONNX_TESTS]
if options.onnx:
selected_tests = onnx_tests
else:
# Exclude all onnx tests otherwise
options.exclude.extend(onnx_tests)
# process exclusion
if options.exclude_jit_executor:
options.exclude.extend(JIT_EXECUTOR_TESTS)
if options.exclude_distributed_tests:
options.exclude.extend(DISTRIBUTED_TESTS)
if options.exclude_inductor_tests:
options.exclude.extend(INDUCTOR_TESTS)
if options.exclude_torch_export_tests:
options.exclude.extend(TORCH_EXPORT_TESTS)
if options.exclude_aot_dispatch_tests:
options.exclude.extend(AOT_DISPATCH_TESTS)
    # these tests are failing in CUDA 11.6; temporarily disabling. See https://github.com/pytorch/pytorch/issues/75375
if torch.version.cuda is not None:
options.exclude.extend(["distributions/test_constraints"])
    # these tests are failing in Python 3.12; temporarily disabling
if sys.version_info >= (3, 12):
options.exclude.extend(
[
"functorch/test_dims",
"functorch/test_rearrange",
"functorch/test_parsing",
"functorch/test_memory_efficient_fusion",
"torch_np/numpy_tests/core/test_multiarray",
]
)
selected_tests = exclude_tests(options.exclude, selected_tests)
if sys.platform == "win32" and not options.ignore_win_blocklist:
target_arch = os.environ.get("VSCMD_ARG_TGT_ARCH")
if target_arch != "x64":
WINDOWS_BLOCKLIST.append("cpp_extensions_aot_no_ninja")
WINDOWS_BLOCKLIST.append("cpp_extensions_aot_ninja")
WINDOWS_BLOCKLIST.append("cpp_extensions_jit")
WINDOWS_BLOCKLIST.append("jit")
WINDOWS_BLOCKLIST.append("jit_fuser")
selected_tests = exclude_tests(WINDOWS_BLOCKLIST, selected_tests, "on Windows")
elif TEST_WITH_ROCM:
selected_tests = exclude_tests(ROCM_BLOCKLIST, selected_tests, "on ROCm")
elif IS_S390X:
selected_tests = exclude_tests(
DISTRIBUTED_TESTS,
selected_tests,
"Skip distributed tests on s390x",
)
# skip all distributed tests if distributed package is not available.
if not dist.is_available():
selected_tests = exclude_tests(
DISTRIBUTED_TESTS,
selected_tests,
"PyTorch is built without distributed support.",
)
# skip tests that require LAPACK when it's not available
if not torch._C.has_lapack:
selected_tests = exclude_tests(
TESTS_REQUIRING_LAPACK,
selected_tests,
"PyTorch is built without LAPACK support.",
)
if TEST_WITH_SLOW_GRADCHECK:
selected_tests = exclude_tests(
TESTS_NOT_USING_GRADCHECK,
selected_tests,
"Running in slow gradcheck mode, skipping tests "
"that don't use gradcheck.",
exact_match=True,
)
selected_tests = [parse_test_module(x) for x in selected_tests]
return selected_tests
def load_test_times_from_file(file: str) -> dict[str, Any]:
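    """Look up recorded test times with progressively coarser fallbacks:
    (BUILD_ENVIRONMENT, TEST_CONFIG), then ("default", TEST_CONFIG), then
    ("default", "default"). Returns {} when the file is missing, in which
    case round-robin sharding is used instead.
    """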
# Load previous test times to make sharding decisions
path = os.path.join(str(REPO_ROOT), file)
if not os.path.exists(path):
print_to_stderr(
f"::warning:: Failed to find test times file `{path}`. Using round robin sharding."
)
return {}
with open(path) as f:
test_times_file = cast(dict[str, Any], json.load(f))
build_environment = os.environ.get("BUILD_ENVIRONMENT")
test_config = os.environ.get("TEST_CONFIG")
if test_config in test_times_file.get(build_environment, {}):
print_to_stderr("Found test times from artifacts")
return test_times_file[build_environment][test_config]
elif test_config in test_times_file["default"]:
print_to_stderr(
f"::warning:: Gathered no stats from artifacts for {build_environment} build env"
f" and {test_config} test config. Using default build env and {test_config} test config instead."
)
return test_times_file["default"][test_config]
else:
print_to_stderr(
f"::warning:: Gathered no stats from artifacts for build env {build_environment} build env"
f" and {test_config} test config. Using default build env and default test config instead."
)
return test_times_file["default"]["default"]
def load_test_file_times(
file: str = ADDITIONAL_CI_FILES_FOLDER / TEST_TIMES_FILE,
) -> dict[str, float]:
return cast(dict[str, float], load_test_times_from_file(file))
def load_test_class_times(
file: str = ADDITIONAL_CI_FILES_FOLDER / TEST_CLASS_TIMES_FILE,
) -> dict[str, dict[str, float]]:
return cast(dict[str, dict[str, float]], load_test_times_from_file(file))
def get_sharding_opts(options) -> tuple[int, int]:
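    """Return (which_shard, num_shards) from --shard, defaulting to (1, 1).
    Shards are 1-indexed, so which_shard must not exceed num_shards.
    """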
which_shard, num_shards = 1, 1
if options.shard:
assert len(options.shard) == 2, "Unexpected shard format"
assert min(options.shard) > 0, "Shards must be positive numbers"
which_shard, num_shards = options.shard
assert (
which_shard <= num_shards
), "Selected shard must be less than or equal to total number of shards"
return (which_shard, num_shards)
def do_sharding(
options,
selected_tests: Sequence[TestRun],
test_file_times: dict[str, float],
test_class_times: dict[str, dict[str, float]],
sort_by_time: bool = True,
) -> tuple[float, list[ShardedTest]]:
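    """Split `selected_tests` into `num_shards` balanced shards using the
    recorded file and class times, and return the (estimated time, tests)
    pair for the shard selected via --shard.
    """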
which_shard, num_shards = get_sharding_opts(options)
# Do sharding
shards = calculate_shards(
num_shards,
selected_tests,
test_file_times,
test_class_times=test_class_times,
must_serial=must_serial,
sort_by_time=sort_by_time,
)
return shards[which_shard - 1]
class TestFailure(NamedTuple):
test: TestRun
message: str
def run_test_module(
test: ShardedTest, test_directory: str, options
) -> Optional[TestFailure]:
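    """Run a single test module through its handler and report the outcome.

    Returns None on success and a TestFailure otherwise. A negative return
    code means the child process was killed by the corresponding signal.
    """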
try:
maybe_set_hip_visible_devies()
test_name = test.name
# Printing the date here can help diagnose which tests are slow
print_to_stderr(f"Running {str(test)} ... [{datetime.now()}]")
handler = CUSTOM_HANDLERS.get(test_name, run_test)
return_code = handler(test, test_directory, options)
assert isinstance(return_code, int) and not isinstance(
return_code, bool
), f"While running {str(test)} got non integer return code {return_code}"
if return_code == 0:
return None
message = f"{str(test)} failed!"
if return_code < 0:
# subprocess.Popen returns the child process' exit signal as
# return code -N, where N is the signal number.
signal_name = SIGNALS_TO_NAMES_DICT[-return_code]
message += f" Received signal: {signal_name}"
return TestFailure(test.test, message)
except Exception as e:
return TestFailure(test.test, f"{str(test)} failed! {e}")
def run_tests(
selected_tests: list[ShardedTest],
test_directory: str,
options,
failures: list[TestFailure],
) -> None:
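    """Run the selected tests, appending any failures to `failures`.

    Files that must_serial run first, one at a time; then the tests marked
    serial inside the remaining files; finally everything else runs in a
    multiprocessing pool of NUM_PROCS workers.
    """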
if len(selected_tests) == 0:
return
# parallel = in parallel with other files
    # serial = this file on its own. The file might still be run in parallel with itself (e.g. test_ops)
selected_tests_parallel = [x for x in selected_tests if not must_serial(x)]
selected_tests_serial = [
x for x in selected_tests if x not in selected_tests_parallel
]
# See Note [ROCm parallel CI testing]
pool = get_context("spawn").Pool(
NUM_PROCS, maxtasksperchild=None if torch.version.hip else 1
)
    # NB: This is a hack to make conftest.py and the files it depends on
    # available in CPP_TESTS_DIR. We should see if the file could be turned
    # into a full-fledged pytest plugin instead
conftest_files = [
"conftest.py",
"pytest_shard_custom.py",
]
for conftest_file in conftest_files:
cpp_file = os.path.join(CPP_TESTS_DIR, conftest_file)
if (
options.cpp
and os.path.exists(CPP_TESTS_DIR)
and os.path.isdir(CPP_TESTS_DIR)
and not os.path.exists(cpp_file)
):
shutil.copy(os.path.join(test_directory, conftest_file), cpp_file)
def handle_error_messages(failure: Optional[TestFailure]):
if failure is None:
return False
failures.append(failure)
print_to_stderr(failure.message)
return True
def parallel_test_completion_callback(failure):
test_failed = handle_error_messages(failure)
if IS_CI and options.upload_artifacts_while_running:
zip_and_upload_artifacts(test_failed)
Do not collect and skip non-disabled tests when rerunning disabled tests (#102107) The console log blows up to much when running in rerun disabled tests mode (x50) https://hud.pytorch.org/pytorch/pytorch/commit/e132f09e8878418fb98a4b76a441a324452354ec. Each log is around 1GB and the whole uncompressed logs is ~50GB. After compression, it will be around 1GB, still too big. The increase comes mainly from the multiple SKIPPED message for non-disabled tests, which is expected due to how SkipTest and pytest-flakyfinder currently work. I update `test/conftest.py` to completely ignore skipped tests when rerunning disabled test instead of collecting then skipping 50 tests each. The benefit of doing is is much more than I originally expect: * Rerun disabled tests jobs now finish in less than half an hour as they should be * Fix OOM runner crash because of too many collected tests * Fix verbosity issue as now only disabled tests are run x50 times. There are only few hundreds of them atm * Fix timed out issue when rerunning disabled distributed and ASAN tests. They are just too slow when running at x50 ### Testing When rerunning disabled tests https://github.com/pytorch/pytorch/actions/runs/5084508614, only disabled tests on the platform are run, for example `test_ops_jit` on https://ossci-raw-job-status.s3.amazonaws.com/log/13770164954 only ran 100 tests (`test_variant_consistency_jit_linalg_lu_cuda_float32` + `test_variant_consistency_jit_linalg_lu_factor_cuda_complex64`) x50. ``` Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops_jit.py', '--shard-id=1', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '--sc=test_ops_jit_1', '--flake-finder', '--flake-runs=50', '--import-slow-tests', '--import-disabled-tests', '--rerun-disabled-tests'] ... [2023-05-25 21:32:49.763856] Expand the folded group to see the log file of test_ops_jit 2/2 ##[group]PRINTING LOG FILE of test_ops_jit 2/2 (/var/lib/jenkins/workspace/test/test-reports/test_ops_jit_h2wr_t2c.log) Test results will be stored in test-reports/python-pytest/test_ops_jit/test_ops_jit-51a83bd44549074e.xml ============================= test session starts ============================== platform linux -- Python 3.10.11, pytest-7.3.1, pluggy-1.0.0 -- /opt/conda/envs/py_3.10/bin/python cachedir: .pytest_cache hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] rootdir: /var/lib/jenkins/workspace configfile: pytest.ini plugins: hypothesis-5.35.1, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-11.1.2, shard-0.1.2, xdist-3.3.0, xdoctest-1.1.0 collecting ... 
collected 1084 items Running 100 items in this shard: test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_cuda_float32 (x50), test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_factor_cuda_complex64 (x50) stepcurrent: Cannot find last run test, not skipping test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_cuda_float32 PASSED [2.1876s] [ 1%] test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_factor_cuda_complex64 PASSED [4.5615s] [ 2%] ``` * [pull](https://github.com/pytorch/pytorch/actions/runs/5093566864) * [trunk](https://github.com/pytorch/pytorch/actions/runs/5095364311) * [periodic](https://github.com/pytorch/pytorch/actions/runs/5095378850) * [slow](https://github.com/pytorch/pytorch/actions/runs/5095390285) Pull Request resolved: https://github.com/pytorch/pytorch/pull/102107 Approved by: https://github.com/clee2000, https://github.com/malfet
2023-05-27 12:10:32 +00:00
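A minimal sketch of the `test/conftest.py` idea from the commit message above, assuming a `--rerun-disabled-tests` pytest option is registered elsewhere (illustrative, not the exact hook PyTorch ships):

```python
# Sketch: during rerun-disabled-tests runs, deselect tests that would
# only be skipped, instead of collecting them x50 and printing SKIPPED.
# Assumes --rerun-disabled-tests is registered via pytest_addoption.
def pytest_collection_modifyitems(config, items):
    if not config.getoption("--rerun-disabled-tests", default=False):
        return
    kept, dropped = [], []
    for item in items:
        if item.get_closest_marker("skip") or item.get_closest_marker("skipif"):
            dropped.append(item)
        else:
            kept.append(item)
    if dropped:
        config.hook.pytest_deselected(items=dropped)
        items[:] = kept
```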
if (
test_failed
and not options.continue_through_error
and not RERUN_DISABLED_TESTS
):
pool.terminate()
keep_going_message = (
"\n\nTip: You can keep running tests even on failure by passing --keep-going to run_test.py.\n"
"If running on CI, add the 'keep-going' label to your PR and rerun your jobs."
)
try:
for test in selected_tests_serial:
options_clone = copy.deepcopy(options)
if can_run_in_pytest(test):
options_clone.pytest = True
failure = run_test_module(test, test_directory, options_clone)
test_failed = handle_error_messages(failure)
if (
test_failed
and not options.continue_through_error
and not RERUN_DISABLED_TESTS
):
raise RuntimeError(failure.message + keep_going_message)
        # Within the parallel test files, first run only the tests marked as serial
for test in selected_tests_parallel:
options_clone = copy.deepcopy(options)
if can_run_in_pytest(test):
options_clone.pytest = True
options_clone.additional_args.extend(["-m", "serial"])
failure = run_test_module(test, test_directory, options_clone)
test_failed = handle_error_messages(failure)
if (
test_failed
and not options.continue_through_error
and not RERUN_DISABLED_TESTS
):
raise RuntimeError(failure.message + keep_going_message)
os.environ["NUM_PARALLEL_PROCS"] = str(NUM_PROCS)
for test in selected_tests_parallel:
options_clone = copy.deepcopy(options)
Add env PYTORCH_TEST_DO_NOT_USE_PYTEST as an option to not use pytest in unit testing (#96444) Set the environment variable ``` PYTORCH_TEST_DO_NOT_USE_PYTEST=1 ``` to avoid using pytest in PyTorch unit testing. This change is related to some recent changes, e.g. #96210, #96016, #95844, #95659, that enabled the use of pytest in many test modules. Those test modules ran normally before, but fail immediately once pytest is used. A sample stack trace: ```python root@8e3168a83ee2:/opt/pytorch/pytorch# python test/run_test.py -v -i test_optim -- -v --save-xml Ignoring disabled issues: [] /opt/pytorch/pytorch/test/run_test.py:1225: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead. if torch.version.cuda is not None and LooseVersion(torch.version.cuda) >= "11.6": Selected tests: test_optim parallel (file granularity) tests: test_optim serial (file granularity) tests: Ignoring disabled issues: [] Ignoring disabled issues: [] Running test_optim ... [2023-03-09 12:51:59.358110] Executing ['/usr/local/bin/python', '-bb', 'test_optim.py', '-v', '--save-xml', '-v', '--use-pytest', '-vv', '-rfEX', '-x', '--reruns=2'] ... [2023-03-09 12:51:59.358810] Test results will be stored in test-reports/python-pytest/test_optim/test_optim-5e41643c8bac8ace.xml Traceback (most recent call last): File "/opt/pytorch/pytorch/test/test_optim.py", line 4581, in <module> run_tests() File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 796, in run_tests exit_code = pytest.main(args=pytest_args) File "/usr/local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 148, in main config = _prepareconfig(args, plugins) File "/usr/local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 329, in _prepareconfig config = pluginmanager.hook.pytest_cmdline_parse( File "/usr/local/lib/python3.10/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/usr/local/lib/python3.10/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/usr/local/lib/python3.10/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File "/usr/local/lib/python3.10/site-packages/_pytest/helpconfig.py", line 103, in pytest_cmdline_parse config: Config = outcome.get_result() File "/usr/local/lib/python3.10/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/usr/local/lib/python3.10/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/usr/local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1060, in pytest_cmdline_parse self.parse(args) File "/usr/local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1348, in parse self._preparse(args, addopts=addopts) File "/usr/local/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1231, in _preparse self.pluginmanager.load_setuptools_entrypoints("pytest11") File "/usr/local/lib/python3.10/site-packages/pluggy/_manager.py", line 287, in load_setuptools_entrypoints plugin = ep.load() File "/usr/local/lib/python3.10/importlib/metadata/__init__.py", line 171, in load module = import_module(match.group('module')) File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1050, in _gcd_import File "<frozen
importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "/usr/local/lib/python3.10/site-packages/_pytest/assertion/rewrite.py", line 168, in exec_module exec(co, module.__dict__) File "/usr/local/lib/python3.10/site-packages/xdist/looponfail.py", line 16, in <module> import execnet File "/usr/local/lib/python3.10/site-packages/execnet/__init__.py", line 14, in <module> from .gateway_base import DataFormatError File "/usr/local/lib/python3.10/site-packages/execnet/gateway_base.py", line 1138, in <module> FLOAT_FORMAT_SIZE = struct.calcsize(FLOAT_FORMAT) BytesWarning: Comparison between bytes and string FINISHED PRINTING LOG FILE of test_optim (/opt/pytorch/pytorch/test/test-reports/test_optim_1pnlesrz.log) test_optim failed! Traceback (most recent call last): File "/opt/pytorch/pytorch/test/run_test.py", line 1428, in <module> main() File "/opt/pytorch/pytorch/test/run_test.py", line 1386, in main raise RuntimeError( RuntimeError: test_optim failed! Tip: You can keep running tests even on failure by passing --keep-going to run_test.py. If running on CI, add the 'keep-going' label to your PR and rerun your jobs. ``` I'd like to propose this option to allow users to run their tests in CI the good old Python unittest way instead of with pytest. Pull Request resolved: https://github.com/pytorch/pytorch/pull/96444 Approved by: https://github.com/malfet
2023-03-10 01:32:11 +00:00
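A minimal sketch of how such an escape hatch could look inside a `run_tests()` entry point (an assumption about the shape, not the actual `torch.testing._internal.common_utils` code):

```python
# Sketch of the proposed escape hatch; not the actual common_utils.run_tests.
import os
import sys
import unittest

def run_tests(argv=None):
    argv = argv if argv is not None else sys.argv
    if os.environ.get("PYTORCH_TEST_DO_NOT_USE_PYTEST") == "1":
        unittest.main(argv=argv)  # fall back to the plain unittest runner
    else:
        import pytest
        sys.exit(pytest.main(args=argv[1:]))
```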
if can_run_in_pytest(test):
options_clone.pytest = True
options_clone.additional_args.extend(["-m", "not serial"])
pool.apply_async(
run_test_module,
args=(test, test_directory, options_clone),
callback=parallel_test_completion_callback,
)
pool.close()
pool.join()
del os.environ["NUM_PARALLEL_PROCS"]
finally:
pool.terminate()
pool.join()
return
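The two parallel-file passes above lean on pytest marker selection (`-m serial`, then `-m "not serial"`); a toy illustration of a test participating in that split (test names are hypothetical, only the `serial` marker comes from the code above):

```python
# Toy illustration of the marker-based split; test names are hypothetical.
import pytest

@pytest.mark.serial
def test_needs_exclusive_gpu():
    assert True  # picked up by the first pass: -m serial

def test_cheap_cpu_only():
    assert True  # picked up by the pool pass: -m "not serial"
```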
def check_pip_packages() -> None:
packages = [
"pytest-rerunfailures",
"pytest-flakefinder",
"pytest-xdist",
]
installed_packages = [i.key for i in pkg_resources.working_set]
for package in packages:
if package not in installed_packages:
print_to_stderr(
f"Missing pip dependency: {package}, please run `pip install -r .ci/docker/requirements-ci.txt`"
)
sys.exit(1)
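`pkg_resources` is deprecated in recent setuptools releases; for comparison, the same check can be written against the stdlib `importlib.metadata` (an alternative sketch, not what this file currently does):

```python
# Alternative sketch using stdlib importlib.metadata (Python 3.8+)
# instead of the deprecated pkg_resources; not the current run_test.py code.
import sys
from importlib import metadata

def check_pip_packages_stdlib() -> None:
    for package in ("pytest-rerunfailures", "pytest-flakefinder", "pytest-xdist"):
        try:
            metadata.distribution(package)
        except metadata.PackageNotFoundError:
            print(
                f"Missing pip dependency: {package}, please run "
                "`pip install -r .ci/docker/requirements-ci.txt`",
                file=sys.stderr,
            )
            sys.exit(1)
```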
def main():
check_pip_packages()
options = parse_args()
# Include sharding info in all metrics
which_shard, num_shards = get_sharding_opts(options)
add_global_metric("shard", which_shard)
add_global_metric("num_shards", num_shards)
test_directory = str(REPO_ROOT / "test")
selected_tests = get_selected_tests(options)
test_prioritizations = import_results()
if len(test_prioritizations.get_all_tests()) == 0:
options.enable_td = False
test_prioritizations.amend_tests(selected_tests)
os.makedirs(REPO_ROOT / "test" / "test-reports", exist_ok=True)
if options.coverage and not PYTORCH_COLLECT_COVERAGE:
shell(["coverage", "erase"])
if IS_CI:
        # Download the test case configuration to the local environment
get_test_case_configs(dirpath=test_directory)
test_file_times_dict = load_test_file_times()
test_class_times_dict = load_test_class_times()
class TestBatch:
"""Defines a set of tests with similar priority that should be run together on the current shard"""
name: str
sharded_tests: list[ShardedTest]
failures: list[TestFailure]
def __init__(
self, name: str, raw_tests: Sequence[TestRun], should_sort_shard: bool
):
self.name = name
self.failures = []
self.time, self.sharded_tests = do_sharding(
options,
raw_tests,
test_file_times_dict,
test_class_times_dict,
sort_by_time=should_sort_shard,
)
def __str__(self):
s = f"Name: {self.name} (est. time: {round(self.time / 60, 2)}min)\n"
serial = [test for test in self.sharded_tests if must_serial(test)]
parallel = [test for test in self.sharded_tests if not must_serial(test)]
s += f" Serial tests ({len(serial)}):\n"
s += "".join(f" {test}\n" for test in serial)
s += f" Parallel tests ({len(parallel)}):\n"
s += "".join(f" {test}\n" for test in parallel)
return s.strip()
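`do_sharding` is defined elsewhere in this file; as a rough illustration of the time-balancing idea behind it, a greedy longest-processing-time assignment looks like this (a sketch, not the actual algorithm):

```python
# Rough sketch of time-balanced sharding (not the actual do_sharding):
# hand the slowest remaining test to the shard with the least total time.
import heapq

def greedy_shard(test_times: dict[str, float], num_shards: int) -> list[list[str]]:
    heap = [(0.0, i, []) for i in range(num_shards)]  # (total_time, shard_idx, tests)
    heapq.heapify(heap)
    for test, secs in sorted(test_times.items(), key=lambda kv: -kv[1]):
        total, idx, tests = heapq.heappop(heap)
        tests.append(test)
        heapq.heappush(heap, (total + secs, idx, tests))
    return [tests for _, _, tests in sorted(heap, key=lambda entry: entry[1])]
```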
percent_to_run = 25 if options.enable_td else 100
print_to_stderr(
f"Running {percent_to_run}% of tests based on TD"
if options.enable_td
else "Running all tests"
)
include, exclude = test_prioritizations.get_top_per_tests(percent_to_run)
test_batch = TestBatch("tests to run", include, False)
test_batch_exclude = TestBatch("excluded", exclude, True)
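Conceptually, `get_top_per_tests(percent)` splits the prioritized ordering at a percentage cutoff into the batch to run and the batch to exclude; a toy sketch of that idea (the function name and shapes here are illustrative, not the TD interface):

```python
# Toy sketch of a percentage cutoff over a prioritized ordering;
# illustrative only, not the actual test_prioritizations interface.
def split_by_percent(ranked_tests: list[str], percent: int) -> tuple[list[str], list[str]]:
    cutoff = max(1, len(ranked_tests) * percent // 100) if ranked_tests else 0
    return ranked_tests[:cutoff], ranked_tests[cutoff:]
```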
if IS_CI:
gen_ci_artifact([x.to_json() for x in include], [x.to_json() for x in exclude])
print_to_stderr(f"Running parallel tests on {NUM_PROCS} processes")
print_to_stderr(test_batch)
print_to_stderr(test_batch_exclude)
if options.dry_run:
return
if options.dynamo:
os.environ["PYTORCH_TEST_WITH_DYNAMO"] = "1"
elif options.inductor:
os.environ["PYTORCH_TEST_WITH_INDUCTOR"] = "1"
if not options.no_translation_validation:
os.environ["PYTORCH_TEST_WITH_TV"] = "1"
try:
# Actually run the tests
start_time = time.time()
run_tests(
test_batch.sharded_tests, test_directory, options, test_batch.failures
)
elapsed_time = time.time() - start_time
        print_to_stderr(
            f"Running test batch '{test_batch.name}' took {round(elapsed_time, 2)} seconds"
        )
finally:
if options.coverage:
from coverage import Coverage
with set_cwd(test_directory):
cov = Coverage()
if PYTORCH_COLLECT_COVERAGE:
cov.load()
cov.combine(strict=False)
cov.save()
if not PYTORCH_COLLECT_COVERAGE:
cov.html_report()
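For reference, the calls above follow the standard coverage.py API; condensed (and ignoring the PYTORCH_COLLECT_COVERAGE gating), the combine-and-report flow is:

```python
# Condensed view of the coverage.py flow above (standard API calls).
from coverage import Coverage

cov = Coverage()
cov.load()                 # read the existing .coverage data file
cov.combine(strict=False)  # merge .coverage.* files from parallel runs
cov.save()                 # write the merged data back to .coverage
cov.html_report()          # emit htmlcov/index.html
```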
2018-03-09 21:02:02 +00:00
all_failures = test_batch.failures
if IS_CI:
for test, _ in all_failures:
test_stats = test_prioritizations.get_test_stats(test)
print_to_stderr("Emiting td_test_failure_stats_v2")
emit_metric(
"td_test_failure_stats_v2",
{
"selected_tests": selected_tests,
"failure": str(test),
**test_stats,
},
)
gen_additional_test_failures_file(
[test.test_file for test, _ in all_failures]
)
if len(all_failures):
for _, err in all_failures:
print_to_stderr(err)
Do not collect and skip non-disabled tests when rerunning disabled tests (#102107) Pull Request resolved: https://github.com/pytorch/pytorch/pull/102107 Approved by: https://github.com/clee2000, https://github.com/malfet
2023-05-27 12:10:32 +00:00
# A disabled test is expected to fail, so there is no need to report a failure here
if not RERUN_DISABLED_TESTS:
sys.exit(1)
2018-03-09 21:02:02 +00:00
if __name__ == "__main__":
2018-03-09 21:02:02 +00:00
main()