Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-14 20:57:59 +00:00
Fixes the issue that originally caused these tests to be moved to unstable (https://github.com/pytorch/pytorch/pull/145790). We ensure GPU isolation for each pod within Kubernetes by propagating the GPU devices selected for the pod from the Kubernetes layer down to the `docker run` invocation in PyTorch. Each runner now sticks with the GPUs assigned to its pod in the first place, so there is no overlap between test runners. Pull Request resolved: https://github.com/pytorch/pytorch/pull/145829 Approved by: https://github.com/jeffdaily
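The mechanism described above — forwarding the pod's assigned GPU devices into the container — can be sketched as a small shell fragment. This is an illustrative sketch, not the actual CI script: the `ASSIGNED_DEVICES` variable name and the image name are hypothetical, and the `--device` flags shown follow the ROCm convention of exposing `/dev/kfd` plus the pod's render nodes.

```shell
#!/bin/sh
# Hedged sketch: build `docker run` GPU flags from the devices a Kubernetes
# device plugin assigned to this pod, instead of exposing all host GPUs.
# ASSIGNED_DEVICES is a hypothetical env var; a real device plugin would set
# something equivalent (a comma-separated list of render nodes).
ASSIGNED_DEVICES="${ASSIGNED_DEVICES:-/dev/dri/renderD128,/dev/dri/renderD129}"

DEVICE_FLAGS=""
# Turn the comma-separated list into repeated --device flags.
for dev in $(echo "$ASSIGNED_DEVICES" | tr ',' ' '); do
  DEVICE_FLAGS="$DEVICE_FLAGS --device=$dev"
done

# Print the resulting command instead of executing it, so the mapping from
# pod-assigned devices to docker flags is visible. /dev/kfd is the ROCm
# compute interface and is shared; isolation comes from the render nodes.
echo "docker run$DEVICE_FLAGS --device=/dev/kfd pytorch-test-image"
```

Because only the pod's own render nodes are passed through, two test runners scheduled on the same host can no longer see (or contend for) each other's GPUs.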
| Directory |
|---|
| build-android |
| checkout-pytorch |
| chown-workspace |
| diskspace-cleanup |
| download-build-artifacts |
| download-td-artifacts |
| filter-test-configs |
| get-workflow-job-id |
| linux-test |
| pytest-cache-download |
| pytest-cache-upload |
| setup-linux |
| setup-rocm |
| setup-win |
| setup-xpu |
| teardown-rocm |
| teardown-win |
| teardown-xpu |
| test-pytorch-binary |
| upload-sccache-stats |
| upload-test-artifacts |