# pytorch/test/quantization/core

Latest commit cdab1d676c by Vasiliy Kuznetsov: pt2e short term quant: respect qmin/qmax for linear weight (#96232)
Summary:

Makes the `nnqr.Linear` module respect the qmin/qmax attributes of its weight observer. This unblocks customer teams that depend on non-default values of these attributes.
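To illustrate what "respecting qmin/qmax" means here, below is a minimal, dependency-free sketch of per-tensor affine quantization that clamps to the observer's range. The helper `quantize_per_tensor` is hypothetical and not the PyTorch API; it only mirrors the clamping behavior the fix ensures on the linear weight path.

```python
# Hypothetical standalone helper (not the PyTorch API): per-tensor affine
# quantization that clamps results to the observer's [qmin, qmax] range.
def quantize_per_tensor(weight, scale, zero_point, qmin, qmax):
    """Quantize each float value and clamp to the observer's [qmin, qmax]."""
    out = []
    for w in weight:
        q = round(w / scale) + zero_point
        # Respecting qmin/qmax: values outside the range are clamped rather
        # than left at the dtype's default limits (e.g. int8's [-128, 127]).
        out.append(max(qmin, min(qmax, q)))
    return out

# With a non-default symmetric range [-127, 127], a weight value that would
# otherwise round to -128 is clamped to -127.
q = quantize_per_tensor([-1.28, 0.0, 1.27], scale=0.01, zero_point=0,
                        qmin=-127, qmax=127)
```

A module that ignores the observer's attributes and always clamps to the dtype defaults would emit -128 for the first value above; honoring qmin/qmax yields -127 instead.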

Test plan:

```
python test/test_quantization.py -k TestReferenceQuantizedModule.test_linear_decomposed
```

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96232
Approved by: https://github.com/andrewor14
2023-03-10 04:46:20 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| experimental | Add various uninterpreted bit tensor data types (try 2) (#95860) | 2023-03-04 03:35:59 +00:00 |
| __init__.py | | |
| test_backend_config.py | AO migration: replace torch internal callsites (#94170) | 2023-02-07 02:32:23 +00:00 |
| test_docs.py | [BE] [3/3] Rewrite super() calls in test (#94592) | 2023-02-12 22:20:53 +00:00 |
| test_quantized_functional.py | | |
| test_quantized_module.py | pt2e short term quant: respect qmin/qmax for linear weight (#96232) | 2023-03-10 04:46:20 +00:00 |
| test_quantized_op.py | equal_quantized_cpu requires both inputs are quantized tensor (#95875) | 2023-03-03 05:33:23 +00:00 |
| test_quantized_tensor.py | [quant][pt2e] Add support for dynamic quantization with symmetric quant for input (#94854) | 2023-02-28 19:39:31 +00:00 |
| test_top_level_apis.py | | |
| test_utils.py | AO migration: replace torch internal callsites (#94170) | 2023-02-07 02:32:23 +00:00 |
| test_workflow_module.py | [BE] [3/3] Rewrite super() calls in test (#94592) | 2023-02-12 22:20:53 +00:00 |
| test_workflow_ops.py | [BE] [3/3] Rewrite super() calls in test (#94592) | 2023-02-12 22:20:53 +00:00 |