Fix typos under docs directory (#88033)

This PR fixes typos in `.rst` and `.Doxyfile` files under the docs directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88033
Approved by: https://github.com/soulitzer
Author: Kazuaki Ishizaki
Date: 2022-10-31 19:31:56 +00:00
Committed by: PyTorch MergeBot
commit 7d2f1cd211 (parent c7ac333430)
8 changed files with 8 additions and 8 deletions

@@ -1490,7 +1490,7 @@ EXT_LINKS_IN_WINDOW = NO
 FORMULA_FONTSIZE = 10
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #

@@ -1488,7 +1488,7 @@ EXT_LINKS_IN_WINDOW = NO
 FORMULA_FONTSIZE = 10
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #

@@ -144,7 +144,7 @@ CUDA Stream Usage Examples
   // sum() on tensor0 use `myStream0` as current CUDA stream on device 0
   tensor0.sum();
-  // change the current device index to 1 by using CUDA device guard within a braket scope
+  // change the current device index to 1 by using CUDA device guard within a bracket scope
   {
     at::cuda::CUDAGuard device_guard{1};
     // create a tensor on device 1

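For context, the guarded device switch this hunk touches has a direct Python counterpart. A minimal sketch, assuming a machine with at least two CUDA devices (the tensor and stream names are illustrative, not from the file)::

    import torch

    tensor0 = torch.ones(1000, device='cuda:0')
    stream0 = torch.cuda.Stream(device=0)

    # sum() runs with stream0 as the current stream on device 0
    with torch.cuda.stream(stream0):
        tensor0.sum()

    # torch.cuda.device plays the role of the scoped CUDAGuard: the current
    # device is 1 inside the block and restored on exit
    with torch.cuda.device(1):
        tensor1 = torch.ones(1000, device='cuda')  # created on device 1
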
@@ -29,7 +29,7 @@ Here is an example of a simple synchronization error in PyTorch:
 The ``a`` tensor is initialized on the default stream and, without any synchronization
 methods, modified on a new stream. The two kernels will run concurrently on the same tensor,
-which might cause the second kernel to read unitialized data before the first one was able
+which might cause the second kernel to read uninitialized data before the first one was able
 to write it, or the first kernel might overwrite part of the result of the second.
 When this script is run on the commandline with:
 ::

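The race this hunk describes is worth pairing with the usual fix. A minimal sketch, assuming the two-stream setup from the excerpt (the tensor name ``a`` follows the doc; sizes are illustrative)::

    import torch

    a = torch.full((10000,), 1.0, device='cuda')  # written on the default stream
    s = torch.cuda.Stream()

    # Without this wait, `a * 2` on stream `s` may read `a` before the
    # default-stream fill has finished writing it.
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        b = a * 2
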
@@ -65,7 +65,7 @@ in real time.
 See :class:`~torch.utils.data.IterableDataset` for more details.
-.. note:: When using an :class:`~torch.utils.data.IterableDataset` with
+.. note:: When using a :class:`~torch.utils.data.IterableDataset` with
   `multi-process data loading <Multi-process data loading_>`_. The same
   dataset object is replicated on each worker process, and thus the
   replicas must be configured differently to avoid duplicated data. See

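The note this hunk edits is about configuring the per-worker replicas; a minimal sketch of that pattern, using a hypothetical ``RangeDataset`` (not from the file) that shards its range by worker id so workers emit no duplicates::

    import math
    from torch.utils.data import DataLoader, IterableDataset, get_worker_info

    class RangeDataset(IterableDataset):
        def __init__(self, start, end):
            self.start, self.end = start, end

        def __iter__(self):
            info = get_worker_info()
            if info is None:  # single-process loading: emit everything
                return iter(range(self.start, self.end))
            # multi-process loading: each worker emits a disjoint shard
            per_worker = math.ceil((self.end - self.start) / info.num_workers)
            lo = self.start + info.id * per_worker
            return iter(range(lo, min(lo + per_worker, self.end)))

    loader = DataLoader(RangeDataset(0, 10), num_workers=2)
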
@@ -36,7 +36,7 @@ What is an FX transform? Essentially, it's a function that looks like this.
     # Step 3: Construct a Module to return
     return torch.fx.GraphModule(m, graph)
-Your transform will take in an :class:`torch.nn.Module`, acquire a :class:`Graph`
+Your transform will take in a :class:`torch.nn.Module`, acquire a :class:`Graph`
 from it, do some modifications, and return a new
 :class:`torch.nn.Module`. You should think of the :class:`torch.nn.Module` that your FX
 transform returns as identical to a regular :class:`torch.nn.Module` -- you can pass it to another

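The three-step transform shape this hunk's prose describes, as a self-contained sketch (the pass body is a deliberate no-op; a real transform would rewrite graph nodes in Step 2)::

    import torch
    import torch.fx

    def transform(m: torch.nn.Module) -> torch.nn.Module:
        # Step 1: Acquire a Graph by symbolically tracing the module
        graph = torch.fx.symbolic_trace(m).graph
        # Step 2: Modify the Graph (no-op here; lint() just sanity-checks it)
        graph.lint()
        # Step 3: Construct a GraphModule to return
        return torch.fx.GraphModule(m, graph)
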
@@ -529,7 +529,7 @@ Quantized dtypes and quantization schemes
 Note that operator implementations currently only
 support per channel quantization for weights of the **conv** and **linear**
 operators. Furthermore, the input data is
-mapped linearly to the the quantized data and vice versa
+mapped linearly to the quantized data and vice versa
 as follows:
 .. math::

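The formula after ``.. math::`` is cut off in this hunk. For reference, the linear (affine) mapping the sentence refers to conventionally has the shape below; this is a sketch of the standard convention with scale ``s`` and zero point ``z``, not necessarily the exact formula in the file:

    Q(x) = \operatorname{clamp}\left(\operatorname{round}\left(\frac{x}{s}\right) + z,\; q_{\min},\; q_{\max}\right),
    \qquad
    x \approx s \cdot \left(Q(x) - z\right)
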
@@ -354,7 +354,7 @@ QAT API Example::
     # attach a global qconfig, which contains information about what kind
     # of observers to attach. Use 'fbgemm' for server inference and
     # 'qnnpack' for mobile inference. Other quantization configurations such
-    # as selecting symmetric or assymetric quantization and MinMax or L2Norm
+    # as selecting symmetric or asymmetric quantization and MinMax or L2Norm
     # calibration techniques can be specified here.
     model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
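
For context, the qconfig assignment this hunk ends on is the first step of the eager-mode QAT flow; a hedged sketch of the remaining steps, assuming ``model_fp32`` is a float module set up as in the surrounding example::

    import torch

    # 1. attach the qconfig (the line the hunk shows)
    model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')

    # 2. insert fake-quantize modules and observers for training
    model_prepared = torch.quantization.prepare_qat(model_fp32.train())

    # ... run the usual training loop on model_prepared ...

    # 3. convert the calibrated model to a quantized one for inference
    model_int8 = torch.quantization.convert(model_prepared.eval())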