pytorch/test/quantization/core
Huamin Li fd494dd426 Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401)
Summary: In https://github.com/pytorch/pytorch/pull/134232, we added two new ops, wrapped_linear_prepack and wrapped_quantized_linear_prepacked. Following the review comments and offline discussion, we are making them private by adding `_` as a prefix.

Differential Revision: D62325142

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135401
Approved by: https://github.com/houseroad
2024-09-08 04:16:24 +00:00
experimental Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
__init__.py
test_backend_config.py
test_docs.py
test_quantized_functional.py
test_quantized_module.py Fix failures when default is flipped for weights_only (#127627) 2024-08-16 00:22:43 +00:00
test_quantized_op.py Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401) 2024-09-08 04:16:24 +00:00
test_quantized_tensor.py Fix failures when default is flipped for weights_only (#127627) 2024-08-16 00:22:43 +00:00
test_top_level_apis.py
test_utils.py Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
test_workflow_module.py Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00
test_workflow_ops.py Add None return type to init -- tests (#132352) 2024-08-01 15:44:51 +00:00