pytorch/test/quantization/core
Huamin Li fd494dd426 Change wrapped_linear_prepack and wrapped_quantized_linear_prepacked to private by adding _ as prefix (#135401)
Summary: In https://github.com/pytorch/pytorch/pull/134232, we added two new ops, wrapped_linear_prepack and wrapped_quantized_linear_prepacked. Following the review comments and offline discussion, we are making them private by adding `_` as a prefix.
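The rename follows Python's leading-underscore convention for marking names as internal. A minimal generic sketch of that convention (the function name and signature here are illustrative only, not the actual PyTorch op signatures):

```python
# Illustration of the underscore-prefix privacy convention
# (hypothetical names; not the real PyTorch quantized ops).

def _wrapped_linear_prepack_demo(weight, bias):
    # The leading underscore signals "internal API": tools and
    # `from module import *` skip such names by default, and
    # downstream users should not rely on them being stable.
    return (weight, bias)

# Internal callers still invoke the op directly:
packed = _wrapped_linear_prepack_demo([1.0, 2.0], [0.5])
```

Public call sites inside the library are updated to the new `_`-prefixed names in the same change, so behavior is unchanged; only the advertised visibility of the ops differs.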

Differential Revision: D62325142

Pull Request resolved: https://github.com/pytorch/pytorch/pull/135401
Approved by: https://github.com/houseroad
2024-09-08 04:16:24 +00:00
experimental
__init__.py
test_backend_config.py
test_docs.py
test_quantized_functional.py
test_quantized_module.py
test_quantized_op.py
test_quantized_tensor.py
test_top_level_apis.py
test_utils.py
test_workflow_module.py
test_workflow_ops.py