add xpu to torch.compile (#127279)
As support for Intel GPU has been upstreamed, this PR adds the XPU-related content to the torch.compile docs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127279
Approved by: https://github.com/dvrogozh, https://github.com/svekars
parent 790138fdc7
commit 8763d44bf1
2 changed files with 3 additions and 2 deletions
@@ -22,7 +22,7 @@ written in Python and it marks the transition of PyTorch from C++ to Python.
 * **TorchInductor** is the default ``torch.compile`` deep learning compiler
   that generates fast code for multiple accelerators and backends. You
   need to use a backend compiler to make speedups through ``torch.compile``
-  possible. For NVIDIA and AMD GPUs, it leverages OpenAI Triton as the key
+  possible. For NVIDIA, AMD and Intel GPUs, it leverages OpenAI Triton as the key
   building block.

 * **AOT Autograd** captures not only the user-level code, but also backpropagation,
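For context, a minimal sketch (not part of the diff) of what the paragraph above describes: compiling a function with TorchInductor. ``backend="inductor"`` is already the default, and the function and tensor shapes here are illustrative assumptions.

.. code:: python

    import torch

    def fn(x):
        # an arbitrary pointwise function, chosen only for illustration
        return torch.nn.functional.gelu(x) * x

    # backend="inductor" is the default; spelled out to make the choice explicit
    compiled_fn = torch.compile(fn, backend="inductor")

    x = torch.randn(1024, 1024)
    print(compiled_fn(x).shape)  # torch.Size([1024, 1024])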
@@ -15,7 +15,8 @@ understanding of how you can use ``torch.compile`` in your own programs.
 .. note::
    To run this script, you need to have at least one GPU on your machine.
    If you do not have a GPU, you can remove the ``.to(device="cuda:0")`` code
-   in the snippet below and it will run on CPU.
+   in the snippet below and it will run on CPU. You can also set device to
+   ``xpu:0`` to run on Intel® GPUs.

 .. code:: python

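To illustrate the note changed above, here is a minimal sketch (not the tutorial's own snippet, which the diff elides) of the device selection it describes: prefer ``cuda:0`` when available, fall back to ``xpu:0`` on Intel® GPUs, and otherwise run on CPU. The model and input are assumptions for demonstration.

.. code:: python

    import torch

    # pick the device the note describes: CUDA, then Intel XPU, then CPU
    if torch.cuda.is_available():
        device = "cuda:0"
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        device = "xpu:0"
    else:
        device = "cpu"

    # hypothetical model and input, for demonstration only
    model = torch.nn.Linear(8, 8).to(device=device)
    opt_model = torch.compile(model)
    print(opt_model(torch.randn(4, 8, device=device)))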