Fix quantization doc issue (#50187)

Summary:
There was a description error in quantization.rst; this fixes it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50187

Reviewed By: mrshenli

Differential Revision: D25895294

Pulled By: soumith

fbshipit-source-id: c0b2e7ba3fadfc0977ab2d4d4e9ed4f93694cedd
Gemfield 2021-02-02 20:23:37 -08:00 committed by Facebook GitHub Bot
parent b18eeaa80a
commit b48ee75507

@@ -169,7 +169,7 @@ Diagram::
     linear_weight_fp32
 
 # dynamically quantized model
-# linear and conv weights are in int8
+# linear and LSTM weights are in int8
 previous_layer_fp32 -- linear_int8_w_fp32_inp -- activation_fp32 -- next_layer_fp32
                      /
    linear_weight_int8
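
The corrected diagram describes dynamic quantization, where `nn.Linear` and `nn.LSTM` weights are stored in int8 while activations stay fp32. A minimal sketch of that workflow (the `TinyModel` module here is a hypothetical example, not from the PR):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Hypothetical model mixing the two module types the doc mentions."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=8)
        self.fc = nn.Linear(8, 4)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out)

model_fp32 = TinyModel().eval()

# quantize_dynamic swaps Linear/LSTM modules for versions whose weights
# are int8; activations are quantized on the fly and outputs are fp32.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear, nn.LSTM}, dtype=torch.qint8
)

# Inference works unchanged: input and output remain fp32 tensors.
y = model_int8(torch.randn(3, 1, 8))
```

Only the module types listed in the second argument are converted, which is why the diagram calls out linear and LSTM weights specifically.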