From 8763d44bf16a991e61eabab9e5ac85a5c20a250a Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Thu, 13 Jun 2024 21:15:09 +0000
Subject: [PATCH] add xpu to torch.compile (#127279)

As support for Intel GPU has been upstreamed, this PR adds the XPU-related
content to the torch.compile documentation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127279
Approved by: https://github.com/dvrogozh, https://github.com/svekars
---
 docs/source/torch.compiler.rst             | 2 +-
 docs/source/torch.compiler_get_started.rst | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/source/torch.compiler.rst b/docs/source/torch.compiler.rst
index c861e413d07..c2c457c0b07 100644
--- a/docs/source/torch.compiler.rst
+++ b/docs/source/torch.compiler.rst
@@ -22,7 +22,7 @@ written in Python and it marks the transition of PyTorch from C++ to Python.
 * **TorchInductor** is the default ``torch.compile`` deep learning compiler
   that generates fast code for multiple accelerators and backends. You
   need to use a backend compiler to make speedups through ``torch.compile``
-  possible. For NVIDIA and AMD GPUs, it leverages OpenAI Triton as the key
+  possible. For NVIDIA, AMD and Intel GPUs, it leverages OpenAI Triton as the key
   building block.
 
 * **AOT Autograd** captures not only the user-level code, but also backpropagation,
diff --git a/docs/source/torch.compiler_get_started.rst b/docs/source/torch.compiler_get_started.rst
index caec0760acc..8e18ad41154 100644
--- a/docs/source/torch.compiler_get_started.rst
+++ b/docs/source/torch.compiler_get_started.rst
@@ -15,7 +15,8 @@ understanding of how you can use ``torch.compile`` in your own programs.
 
 .. note:: To run this script, you need to have at least one GPU on your machine.
    If you do not have a GPU, you can remove the ``.to(device="cuda:0")`` code
-   in the snippet below and it will run on CPU.
+   in the snippet below and it will run on CPU. You can also set device to
+   ``xpu:0`` to run on Intel® GPUs.
 
 .. code:: python
 
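
The note patched above tells readers they can drop the ``.to(device="cuda:0")`` call to run on CPU, or target ``xpu:0`` for Intel GPUs. Below is a minimal sketch of that device selection; it is illustrative only and is not the exact snippet from ``torch.compiler_get_started.rst`` (the function ``fn``, the tensor shape, and the fallback logic are placeholders), and it assumes a PyTorch build recent enough to expose ``torch.xpu``.

.. code:: python

    import torch

    # Pick an accelerator if one is present; otherwise fall back to CPU.
    if torch.cuda.is_available():
        device = "cuda:0"  # NVIDIA or AMD (ROCm) GPU
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        device = "xpu:0"   # Intel GPU via the upstreamed XPU support
    else:
        device = "cpu"

    def fn(x):
        return torch.cos(torch.sin(x))

    x = torch.rand(1024, 1024, device=device)

    # torch.compile routes the function through TorchInductor, which uses
    # Triton as the key building block on NVIDIA, AMD, and Intel GPUs.
    compiled_fn = torch.compile(fn)
    print(compiled_fn(x).sum())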