pytorch/test/quantization
Charles David Hernandez 5044d9dc51 Fixing quantize_per_tensor on cuda (#57703)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57703

The .bzl files were missing registerQuantizedCUDA for some reason; after adding it, the previously broken commands on CUDA now work.

Note: these build files don't affect OSS builds, which were working throughout.

The test_qtensor test was potentially misleading, since it would pass even when CUDA support was broken, as long as the build system wasn't CUDA-enabled. I broke it out into independent tests for each device, so that systems without CUDA produce a skip rather than a pass.
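The per-device split described above can be sketched as follows. This is a minimal, self-contained illustration of the skip-vs-pass pattern: the `cuda_available` flag is a hypothetical stand-in for `torch.cuda.is_available()` so the sketch runs without PyTorch, and the test bodies are placeholders for the real quantize_per_tensor checks.

```python
import unittest

# Hypothetical stand-in for torch.cuda.is_available(); the real test
# suite would query the actual CUDA availability here.
cuda_available = False


class TestQuantizedTensor(unittest.TestCase):
    def _check_qtensor(self, device):
        # Placeholder for the real quantize_per_tensor checks on `device`.
        self.assertIn(device, ("cpu", "cuda"))

    def test_qtensor_cpu(self):
        # Always runs: CPU support is exercised on every build.
        self._check_qtensor("cpu")

    @unittest.skipUnless(cuda_available, "CUDA not available")
    def test_qtensor_cuda(self):
        # Reported as a skip (not a silent pass) on builds without CUDA,
        # so missing CUDA support is visible in the test results.
        self._check_qtensor("cuda")


if __name__ == "__main__":
    unittest.main()
```

With the single combined test, a non-CUDA build would report a pass; with this split, the CUDA case shows up as an explicit skip instead.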

Test Plan:
buck test mode/dbg //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_qtensor_cpu (quantization.test_quantized_tensor.TestQuantizedTensor)'

buck test mode/dbg //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_qtensor_cuda (quantization.test_quantized_tensor.TestQuantizedTensor)'

Reviewed By: jerryzh168

Differential Revision: D28242797

fbshipit-source-id: 938ae86dcd605aedf26ac0bace9db77deaaf9c0f
2021-05-07 12:26:19 -07:00
serialized
__init__.py
test_backward_compatibility.py
test_bias_correction.py
test_equalize.py
test_fusion_passes.py
test_numeric_suite.py
test_numeric_suite_fx.py ns for fx: clean up manual string names of related ops (#57210) 2021-05-05 06:30:32 -07:00
test_qat_module.py Remove legacy constructor calls from pytorch codebase. (#54142) 2021-04-11 15:45:17 -07:00
test_quantize.py
test_quantize_fx.py [quant][graphmode][fx] Skip observering boolean Tensors (#57375) 2021-05-03 11:20:33 -07:00
test_quantize_jit.py Add padding_idx argument to EmbeddingBag (#49237) 2021-04-14 09:38:01 -07:00
test_quantized_functional.py
test_quantized_module.py [quantization] Fix deepcopy on quantized ConvNd (#56154) 2021-04-15 15:18:22 -07:00
test_quantized_op.py make quantizeable MHA work with torch.jit.script (#57774) 2021-05-07 08:40:49 -07:00
test_quantized_tensor.py Fixing quantize_per_tensor on cuda (#57703) 2021-05-07 12:26:19 -07:00
test_workflow_module.py per_channel fake quant fp16 and fp64 support (#56894) 2021-04-30 13:52:45 -07:00