# Motivation

Following [[1/2] Intel GPU Runtime Upstreaming for Generator](https://github.com/pytorch/pytorch/pull/118528), and as outlined in [[RFC] Intel GPU Runtime Upstreaming](https://github.com/pytorch/pytorch/issues/114842), this second PR covers the changes under the `python frontend`.

# Design

Currently, it primarily offers generator-related APIs, including (see the usage sketch after this message):

- `torch.xpu.default_generators`
- `torch.xpu.get_rng_state`
- `torch.xpu.get_rng_state_all`
- `torch.xpu.initial_seed`
- `torch.xpu.manual_seed`
- `torch.xpu.manual_seed_all`
- `torch.xpu.seed`
- `torch.xpu.seed_all`
- `torch.xpu.set_rng_state`
- `torch.xpu.set_rng_state_all`

# Additional Context

Differences from CUDA: the generator-related frontend Python APIs map 1:1 to their CUDA counterparts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118613
Approved by: https://github.com/gujinghui, https://github.com/EikanWang, https://github.com/jgong5, https://github.com/albanD
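As an illustration of the APIs above, here is a minimal, hedged sketch of seeding and RNG-state round-tripping on an XPU device. Only the `torch.xpu.*` names listed in this PR are taken from the source; the availability of random ops on the `"xpu"` device is an assumption.

```python
import torch

# Minimal sketch; assumes an XPU-enabled build of PyTorch and that
# random ops on the "xpu" device are functional.
if torch.xpu.is_available():
    torch.xpu.manual_seed_all(42)        # seed the generator of every visible device
    print(torch.xpu.initial_seed())      # seed of the current device's generator

    state = torch.xpu.get_rng_state()    # snapshot the current device's RNG state
    a = torch.randn(3, device="xpu")
    torch.xpu.set_rng_state(state)       # restore the snapshot
    b = torch.randn(3, device="xpu")
    assert torch.equal(a, b)             # identical state -> identical draws
```

Since the frontend APIs map 1:1 to CUDA, the same snippet with `torch.cuda` substituted for `torch.xpu` should behave identically on a CUDA device.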
torch.xpu
===================================
.. automodule:: torch.xpu
.. currentmodule:: torch.xpu

.. autosummary::
    :toctree: generated
    :nosignatures:

    StreamContext
    current_device
    current_stream
    device
    device_count
    device_of
    empty_cache
    get_device_capability
    get_device_name
    get_device_properties
    init
    is_available
    is_initialized
    set_device
    set_stream
    stream
    synchronize

Random Number Generator
-------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    get_rng_state
    get_rng_state_all
    initial_seed
    manual_seed
    manual_seed_all
    seed
    seed_all
    set_rng_state
    set_rng_state_all

Streams and events
------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    Event
    Stream


.. This module needs to be documented. Adding here in the meantime
.. for tracking purposes
.. py:module:: torch.xpu.random
.. py:module:: torch.xpu.streams
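The device and stream entry points listed above can be exercised the same way. A minimal sketch, assuming an available XPU device and that these names mirror the signatures of their `torch.cuda` counterparts (the index argument to `get_device_name` and the `Stream`/`stream` usage are assumptions, not confirmed by this page):

```python
import torch

# Illustrative only; assumes torch.xpu mirrors torch.cuda's signatures.
if torch.xpu.is_available():
    print(torch.xpu.device_count(), torch.xpu.get_device_name(0))
    torch.xpu.set_device(0)

    s = torch.xpu.Stream()               # create a side stream (assumed API)
    with torch.xpu.stream(s):            # queue work on that stream
        x = torch.ones(4, device="xpu") * 2
    torch.xpu.synchronize()              # wait for all queued XPU work
```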