Mirror of https://github.com/saymrwulf/pytorch.git, synced 2026-05-15 21:00:47 +00:00
Fix quantization doc issue (#50187)
Summary: There was a description error in quantization.rst; this fixes it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50187
Reviewed By: mrshenli
Differential Revision: D25895294
Pulled By: soumith
fbshipit-source-id: c0b2e7ba3fadfc0977ab2d4d4e9ed4f93694cedd
This commit is contained in:
parent b18eeaa80a
commit b48ee75507
1 changed file with 1 addition and 1 deletion
@@ -169,7 +169,7 @@ Diagram::
     linear_weight_fp32

 # dynamically quantized model
-# linear and conv weights are in int8
+# linear and LSTM weights are in int8
 previous_layer_fp32 -- linear_int8_w_fp32_inp -- activation_fp32 -- next_layer_fp32
                     /
    linear_weight_int8
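For context, the corrected comment describes PyTorch's dynamic quantization mode, where only the weights of certain module types (Linear and LSTM, not conv) are stored in int8 while activations stay in fp32. A minimal sketch of what the diagram depicts, using the public `torch.quantization.quantize_dynamic` API (the two-layer model here is a hypothetical example, not from the PR):

```python
# Sketch of dynamic quantization: Linear weights become int8,
# activations remain fp32, matching the corrected diagram comment.
import torch
import torch.nn as nn

# A small example model (hypothetical, for illustration only).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Convert only nn.Linear modules to their dynamically quantized
# counterparts with int8 weights; inputs/outputs stay fp32.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(1, 8))
print(out.shape)  # fp32 output, same shape as the float model's
```

Conv modules are not eligible for this mode, which is why the doc comment listing "conv" was wrong and the PR replaces it with "LSTM".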