mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-14 20:57:59 +00:00
* Will enable us to target `periodic`/distributed CI jobs to 4-GPU runners using a different label `linux.rocm.gpu.4`
* Use 2-GPU runners for `trunk`, `pull` and `slow` (in addition to `inductor-rocm`) as well (although this currently will not change anything, since all our MI2xx runners have both `linux.rocm.gpu` and `linux.rocm.gpu.2` labels... but this will change in the future: see next point)
* Continue to use the `linux.rocm.gpu` label for any job that doesn't need more than 1 GPU, e.g. binary test jobs in `workflows/generated-linux-binary-manywheel-nightly.yml`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143769
Approved by: https://github.com/jeffdaily
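In workflow terms, targeting a job at one of these labels just means setting its `runs-on` value. A minimal sketch of the split described above (job names, step commands, and the test script are illustrative, not taken from the actual PyTorch workflows):

```yaml
# Hypothetical workflow fragment; only the runner labels are real.
jobs:
  distributed-test:
    # periodic/distributed jobs can request a 4-GPU ROCm runner
    runs-on: linux.rocm.gpu.4
    steps:
      - run: python -m torch.distributed.run --nproc_per_node=4 test.py
  binary-test:
    # jobs needing at most 1 GPU keep the generic label
    runs-on: linux.rocm.gpu
    steps:
      - run: python test.py
```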
61 lines
1.8 KiB
YAML
self-hosted-runner:
  labels:
    # GitHub hosted x86 Linux runners
    - linux.20_04.4x
    - linux.20_04.16x
    # Organization-wide AWS Linux Runners
    - linux.large
    - linux.2xlarge
    - linux.4xlarge
    - linux.9xlarge.ephemeral
    - am2.linux.9xlarge.ephemeral
    - linux.12xlarge
    - linux.12xlarge.ephemeral
    - linux.24xlarge
    - linux.24xlarge.ephemeral
    - linux.arm64.2xlarge
    - linux.arm64.2xlarge.ephemeral
    - linux.arm64.m7g.4xlarge
    - linux.arm64.m7g.4xlarge.ephemeral
    - linux.4xlarge.nvidia.gpu
    - linux.8xlarge.nvidia.gpu
    - linux.16xlarge.nvidia.gpu
    - linux.g5.4xlarge.nvidia.gpu
    # Pytorch/pytorch AWS Linux Runners on Linux Foundation account
    - lf.linux.large
    - lf.linux.2xlarge
    - lf.linux.4xlarge
    - lf.linux.12xlarge
    - lf.linux.24xlarge
    - lf.linux.arm64.2xlarge
    - lf.linux.4xlarge.nvidia.gpu
    - lf.linux.8xlarge.nvidia.gpu
    - lf.linux.16xlarge.nvidia.gpu
    - lf.linux.g5.4xlarge.nvidia.gpu
    # Repo-specific IBM hosted S390x runner
    - linux.s390x
    # Organization-wide AWS Windows runners
    - windows.g4dn.xlarge
    - windows.g4dn.xlarge.nonephemeral
    - windows.4xlarge
    - windows.4xlarge.nonephemeral
    - windows.8xlarge.nvidia.gpu
    - windows.8xlarge.nvidia.gpu.nonephemeral
    - windows.g5.4xlarge.nvidia.gpu
    # Organization-wide AMD hosted runners
    - linux.rocm.gpu
    - linux.rocm.gpu.2
    - linux.rocm.gpu.4
    # Repo-specific Apple hosted runners
    - macos-m1-ultra
    - macos-m2-14
    # Org-wide AWS `mac2.metal` runners (2020 Mac mini hardware powered by Apple silicon M1 processors)
    - macos-m1-stable
    - macos-m1-13
    - macos-m1-14
    # GitHub-hosted macOS runners
    - macos-latest-xlarge
    - macos-13-xlarge
    - macos-14-xlarge
    # Organization-wide Intel hosted XPU runners
    - linux.idc.xpu