mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-14 20:57:59 +00:00
Fix typos in docs (#80602)
I hope it helps. Pull Request resolved: https://github.com/pytorch/pytorch/pull/80602 Approved by: https://github.com/kit1980
This commit is contained in:
parent
372a19d2c6
commit
e7635c06ce
3 changed files with 5 additions and 5 deletions
@@ -92,7 +92,7 @@ CUDA Stream Usage Examples

   // get the default CUDA stream on device 0
   at::cuda::CUDAStream defaultStream = at::cuda::getDefaultCUDAStream();
-  // set current CUDA stream back to default CUDA stream on devide 0
+  // set current CUDA stream back to default CUDA stream on device 0
   at::cuda::setCurrentCUDAStream(defaultStream);
   // sum() on tensor0 uses `defaultStream` as current CUDA stream
   tensor0.sum();
@@ -120,7 +120,7 @@ CUDA Stream Usage Examples
 .. attention::

    Above code is running on the same CUDA device. `setCurrentCUDAStream` will always set current CUDA stream on current device,
-   but note that `setCurrentCUDASteram` actually set current stream on the device of passed in CUDA stream.
+   but note that `setCurrentCUDAStream` actually set current stream on the device of passed in CUDA stream.


 2. Acquiring and setting CUDA streams on multiple devices.
@@ -906,7 +906,7 @@ Atoms are the most basic elements of expressions.

 Identifiers
 """""""""""
-The rules that dictate what is a legal identifer in TorchScript are the same as
+The rules that dictate what is a legal identifier in TorchScript are the same as
 their `Python counterparts <https://docs.python.org/3/reference/lexical_analysis.html#identifiers>`_.

 Literals
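The line fixed in this hunk says TorchScript identifiers follow the same rules as Python's, which means the standard library can check them. `is_legal_identifier` below is a hypothetical helper covering only the Python-level rules the doc references; TorchScript may additionally reserve names of its own:

```python
import keyword

def is_legal_identifier(name: str) -> bool:
    # Same two checks Python itself applies: identifier syntax,
    # and not a reserved keyword.
    return name.isidentifier() and not keyword.iskeyword(name)

assert is_legal_identifier("tensor0")
assert is_legal_identifier("_private")
assert not is_legal_identifier("2fast")   # cannot start with a digit
assert not is_legal_identifier("my-var")  # hyphen is not allowed
assert not is_legal_identifier("def")     # reserved keyword
```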
@@ -1830,7 +1830,7 @@ Specifically, following APIs are fully supported:

 - ``torch.distributed.rpc.rpc_async()``
   - ``rpc_async()`` makes a non-blocking RPC call to run a function on a remote worker. RPC messages are sent and received in parallel to execution of Python code.
-  - More deatils about its usage and examples can be found in :meth:`~torch.distributed.rpc.rpc_async`.
+  - More details about its usage and examples can be found in :meth:`~torch.distributed.rpc.rpc_async`.
 - ``torch.distributed.rpc.remote()``
   - ``remote.()`` executes a remote call on a worker and gets a Remote Reference ``RRef`` as the return value.
   - More details about its usage and examples can be found in :meth:`~torch.distributed.rpc.remote`.
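The ``rpc_async()`` bullet in this hunk describes submit-now, wait-later semantics: the call returns immediately and Python code keeps running while the RPC is in flight. As a rough local analogy using only the standard library (a thread pool plays the role of the remote worker; no `torch.distributed` is involved):

```python
# Local stand-in for the rpc_async() pattern: submit a call, keep running
# Python code, and block for the result only when it is actually needed.
from concurrent.futures import ThreadPoolExecutor

def add(x, y):
    # The "remote" function in this analogy.
    return x + y

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(add, 2, 3)            # returns a future immediately
    local_work = [i * i for i in range(5)]  # proceeds while the call runs
    result = fut.result()                   # block only here, for the value

assert result == 5
assert local_work == [0, 1, 4, 9, 16]
```

The real API differs (workers, RRefs, serialization), but the control flow is the same shape: a future-returning call overlapped with local computation.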
@@ -15,7 +15,7 @@ same class methods that :class:`torch.TypedStorage` has.

 A :class:`torch.TypedStorage` is a contiguous, one-dimensional array of
 elements of a particular :class:`torch.dtype`. It can be given any
-:class:`torch.dtype`, and the internal data will be interpretted appropriately.
+:class:`torch.dtype`, and the internal data will be interpreted appropriately.
 :class:`torch.TypedStorage` contains a :class:`torch.UntypedStorage` which
 holds the data as an untyped array of bytes.

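The paragraph patched in this hunk describes a typed view over an untyped byte buffer. A standard-library sketch of that relationship, using `array` in place of the storage classes (type codes ``'i'``/``'h'`` assume the usual 4-byte and 2-byte sizes, and byte order is platform-dependent):

```python
# Plain-Python analogy for TypedStorage over UntypedStorage:
# one untyped buffer of bytes, reinterpreted under different element types.
import array

untyped = bytearray(8)           # 8 raw bytes, all zero

as_i32 = array.array("i")        # view the bytes as two 4-byte ints
as_i32.frombytes(untyped)
as_i32[0] = 1
untyped[:] = as_i32.tobytes()    # write back into the raw buffer

as_i16 = array.array("h")        # reinterpret the same bytes as 2-byte ints
as_i16.frombytes(untyped)
print(list(as_i16))  # on a little-endian machine: [1, 0, 0, 0]
```

Like `TypedStorage`, each `array` only decides how the underlying bytes are interpreted; the bytes themselves live in the untyped buffer.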