pytorch/test/quantization
Salil Desai 8d7242a18b [PyTorch Edge] Add Quantized Softmax Op (Naive Implementation) (#75017)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75017

This version simply dequantizes, applies fp32 softmax, and re-quantizes.
A follow-up version that performs an actual quantized softmax using qnnpack will be added next.

Test Plan:
From fbcode:
```
buck test caffe2/test:quantization -- test_qsoftmax
```

Benchmarking: See summary of D34996486

Reviewed By: kimishpatel

Differential Revision: D34943147

fbshipit-source-id: 426a0780803597a21460139c67960891d6e9cc81
(cherry picked from commit 524eede541773299fc015f47c6cd6275ed5cf421)
2022-03-31 19:32:04 +00:00
ao_migration [quant] Rename _convert_do_not_use.py to convert.py (#74322) 2022-03-17 18:57:08 +00:00
bc [Quant][fx] Reenable serialization test after convert refactor (#74204) 2022-03-15 03:51:14 +00:00
core [PyTorch Edge] Add Quantized Softmax Op (Naive Implementation) (#75017) 2022-03-31 19:32:04 +00:00
dbr dbr quant: enable reference module support for torch.qint32 (#73493) 2022-03-04 17:35:31 +00:00
eager [quant] fix int16 quantization scale in conv weight (#74665) 2022-03-31 06:10:23 +00:00
fx [AO][bugfix] Fixing FX QAT bug for untraceable modules (#74277) 2022-03-30 15:08:45 +00:00
jit
serialized [Quant][fx] Reenable serialization test after convert refactor (#74204) 2022-03-15 03:51:14 +00:00
__init__.py