quant docs: add and clean up GroupNorm (#40343)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40343

Cleans up the quantized GroupNorm docstring and adds it to the quantization docs.

Test Plan: build the docs on macOS and inspect the output

Differential Revision: D22152635

Pulled By: vkuzo

fbshipit-source-id: 5553b841c7a5d77f1467f0c40657db9e5d730a12

docs/source/quantization.rst

@@ -355,6 +355,7 @@ Quantized version of standard NN layers.
   quantized representation of 6
 * :class:`~torch.nn.quantized.Hardswish` — Hardswish
 * :class:`~torch.nn.quantized.LayerNorm` — LayerNorm. *Note: performance on ARM is not optimized*.
+* :class:`~torch.nn.quantized.GroupNorm` — GroupNorm. *Note: performance on ARM is not optimized*.
 
 ``torch.nn.quantized.dynamic``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -793,6 +794,11 @@ LayerNorm
 .. autoclass:: LayerNorm
     :members:
 
+GroupNorm
+~~~~~~~~~~~~~~~
+.. autoclass:: GroupNorm
+    :members:
+
 torch.nn.quantized.dynamic
 ----------------------------

torch/nn/quantized/modules/normalization.py

@@ -42,7 +42,12 @@ class LayerNorm(torch.nn.LayerNorm):
         return new_mod
 
 class GroupNorm(torch.nn.GroupNorm):
-    r"""This is the quantized version of `torch.nn.GroupNorm`."""
+    r"""This is the quantized version of :class:`~torch.nn.GroupNorm`.
+
+    Additional args:
+        * **scale** - quantization scale of the output, type: double.
+        * **zero_point** - quantization zero point of the output, type: long.
+    """
     __constants__ = ['num_groups', 'num_channels', 'eps', 'affine']
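
Not part of the diff: a minimal usage sketch of the quantized module, assuming its constructor follows the same pattern as the quantized LayerNorm in this file (affine weight/bias tensors plus the output scale/zero_point described in the docstring above); the shapes and quantization parameters below are arbitrary example values.

import torch
import torch.nn.quantized as nnq

# Affine parameters for 4 channels split into 2 groups (arbitrary values).
weight = torch.ones(4)
bias = torch.zeros(4)

# scale / zero_point are the quantization parameters of the *output*,
# per the docstring added above.
gn = nnq.GroupNorm(2, 4, weight, bias, scale=0.1, zero_point=0)

# Quantize a float input, then run it through the quantized module.
x = torch.randn(1, 4, 8, 8)
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=64, dtype=torch.quint8)
out = gn(qx)  # quantized tensor carrying scale=0.1, zero_point=0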