pytorch/test/quantization/core
Jerry Zhang bfa16a161d Add int1 to int7 dtypes (#136301)
Summary:
Similar to https://github.com/pytorch/pytorch/pull/117208, we want to add int1 through int7 dtypes for edge use cases
such as weight quantization (https://www.internalfb.com/diff/D62464487).
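As context for why sub-byte dtypes matter for weight quantization, here is a minimal sketch (not part of the PR) of the value ranges a signed N-bit integer can represent, assuming standard two's-complement semantics; `int_range` is a hypothetical helper name:

```python
def int_range(bits):
    # Signed two's-complement range for an N-bit integer:
    # [-2**(N-1), 2**(N-1) - 1]
    return (-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

# Ranges for the dtypes added in this PR, int1 through int7.
for bits in range(1, 8):
    lo, hi = int_range(bits)
    print(f"int{bits}: [{lo}, {hi}]")
# e.g. int4 covers [-8, 7], int7 covers [-64, 63]
```

Narrower ranges trade precision for storage: an int4 weight takes half the bits of int8, which is the usual motivation on memory-constrained edge targets.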

Test Plan:
python test/test_quantization.py -k test_uint4_int4_dtype

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136301
Approved by: https://github.com/ezyang
2024-09-28 02:08:33 +00:00
experimental
__init__.py
test_backend_config.py
test_docs.py
test_quantized_functional.py
test_quantized_module.py
test_quantized_op.py Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401) 2024-09-08 04:16:24 +00:00
test_quantized_tensor.py
test_top_level_apis.py
test_utils.py Add int1 to int7 dtypes (#136301) 2024-09-28 02:08:33 +00:00
test_workflow_module.py Add uint16 support for observer (#136238) 2024-09-18 23:52:18 +00:00
test_workflow_ops.py