pytorch/tools/test/test_upload_test_stats.py
Catherine Lee 06a0cfc0ea pytest to run test_ops, test_ops_gradients, test_ops_jit in non linux cuda environments (#79898)
This PR uses pytest to run test_ops, test_ops_gradients, and test_ops_jit in parallel in non-Linux-CUDA environments to decrease TTS. Linux CUDA is excluded because running in parallel there results in out-of-memory errors.

Notes:
* update hypothesis version for compatibility with pytest
* use rerun-failures to rerun tests (similar to flaky tests, although these test files generally don't have flaky tests)
  * reruns are denoted by a `rerun` tag in the XML. Failed reruns also have the `failure` tag; successes (meaning the test is flaky) do not.
* see https://docs.google.com/spreadsheets/d/1aO0Rbg3y3ch7ghipt63PG2KNEUppl9a5b18Hmv2CZ4E/edit#gid=602543594 for info on speedup (or slowdown in the case of slow tests)
  * expecting windows tests to decrease by 60 minutes total
* slow test infra is expected to stay the same - verified by running pytest and unittest on the same job and checking the number of skipped/run tests
* test reports to s3 changed - added an entirely new table to keep track of invoking_file times
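
The rerun convention described above (a `rerun` child element per retry, with `failure` present only when the final attempt also failed) can be sketched with a small parser. This is an illustrative sketch over a hand-written XML sample, not the actual upload_test_stats logic; the element names follow the convention stated in this PR description.

```python
import xml.etree.ElementTree as ET

# Hypothetical sample in the shape described above: a passing test, a
# flaky test (rerun but no failure), and a genuinely failing test
# (rerun and failure both present).
SAMPLE = """<testsuite>
  <testcase name="test_ok"/>
  <testcase name="test_flaky"><rerun message="failed first attempt"/></testcase>
  <testcase name="test_bad"><rerun message="failed"/><failure message="failed again"/></testcase>
</testsuite>"""

def classify(suite_xml: str) -> dict:
    """Map each test case name to passed / flaky / failed."""
    statuses = {}
    for case in ET.fromstring(suite_xml).iter("testcase"):
        was_rerun = case.find("rerun") is not None
        failed = case.find("failure") is not None
        if failed:
            statuses[case.get("name")] = "failed"
        elif was_rerun:
            # rerun tag but no failure tag: the retry succeeded, so flaky
            statuses[case.get("name")] = "flaky"
        else:
            statuses[case.get("name")] = "passed"
    return statuses

print(classify(SAMPLE))
```

Running this prints `{'test_ok': 'passed', 'test_flaky': 'flaky', 'test_bad': 'failed'}`, mirroring the three cases the PR description distinguishes.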
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79898
Approved by: https://github.com/malfet, https://github.com/janeyx99
2022-07-19 19:50:57 +00:00


import os
import unittest

from tools.stats.upload_test_stats import get_tests, summarize_test_cases

IN_CI = os.environ.get("CI")


class TestUploadTestStats(unittest.TestCase):
    @unittest.skipIf(
        IN_CI,
        "don't run in CI as this does a lot of network calls and uses up GH API rate limit",
    )
    def test_existing_job(self) -> None:
        """Run on a known-good job and make sure we don't error and get basically okay results."""
        test_cases, _ = get_tests(2561394934, 1)
        self.assertEqual(len(test_cases), 609873)
        summary = summarize_test_cases(test_cases)
        self.assertEqual(len(summary), 5068)


if __name__ == "__main__":
    unittest.main()