Mirror of https://github.com/saymrwulf/pytorch.git (synced 2026-05-14 20:57:59 +00:00)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57703

The .bzl files did not list registerQuantizedCUDA for some reason; after adding it, the previously broken commands (on CUDA) now work.

Note: these build files did not affect OSS builds, which worked throughout. The test_qtensor test was potentially misleading, since it would pass even when CUDA support was broken, as long as the build system was not CUDA-enabled. I broke it out into independent tests for each device, so systems without CUDA now report a skip rather than a pass.

Test Plan:
buck test mode/dbg //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_qtensor_cpu (quantization.test_quantized_tensor.TestQuantizedTensor)'
buck test mode/dbg //caffe2/test:quantization -- --exact 'caffe2/test:quantization - test_qtensor_cuda (quantization.test_quantized_tensor.TestQuantizedTensor)'

Reviewed By: jerryzh168
Differential Revision: D28242797
fbshipit-source-id: 938ae86dcd605aedf26ac0bace9db77deaaf9c0f
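The change described above splits one combined test into per-device tests so that a machine without CUDA reports a *skip* instead of a silent pass. A minimal sketch of that pattern using the standard `unittest` module (the test bodies and the `CUDA_AVAILABLE` flag are hypothetical stand-ins; the real tests live in `quantization.test_quantized_tensor` and gate on `torch.cuda.is_available()`):

```python
import unittest

# Hypothetical stand-in for torch.cuda.is_available().
CUDA_AVAILABLE = False

class TestQuantizedTensor(unittest.TestCase):
    def test_qtensor_cpu(self):
        # CPU path: always runs, regardless of build configuration.
        self.assertTrue(True)

    @unittest.skipUnless(CUDA_AVAILABLE, "CUDA not available")
    def test_qtensor_cuda(self):
        # CUDA path: skipped (not passed) when CUDA is absent,
        # so a missing backend is visible in the test report.
        self.assertTrue(True)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestQuantizedTensor)
)
print(len(result.skipped))  # number of skipped tests on this machine
```

With the combined test, the CUDA branch would simply not execute on a non-CUDA build and the suite would still report a pass; the split version surfaces the gap as an explicit skip.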
| Name |
|---|
| serialized |
| __init__.py |
| test_backward_compatibility.py |
| test_bias_correction.py |
| test_equalize.py |
| test_fusion_passes.py |
| test_numeric_suite.py |
| test_numeric_suite_fx.py |
| test_qat_module.py |
| test_quantize.py |
| test_quantize_fx.py |
| test_quantize_jit.py |
| test_quantized_functional.py |
| test_quantized_module.py |
| test_quantized_op.py |
| test_quantized_tensor.py |
| test_workflow_module.py |