pytorch/test/quantization
Howard Huang 5610e8271b Fix skip_if_not_multigpu decorator (#54916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54916

Fixes https://github.com/pytorch/pytorch/issues/54887

`skip_if_not_multigpu` was unconditionally skipping every test that used it, even when enough GPUs were available.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D27412193

Pulled By: H-Huang

fbshipit-source-id: 28d6697bd8cc6b6784cdb038ccb3ff138d0610eb
2021-04-01 18:01:33 -07:00
serialized
__init__.py
test_backward_compatibility.py
test_bias_correction.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_equalize.py
test_fusion_passes.py
test_numeric_suite.py Numeric Suite: Swap with shadow modules only for quantized part of model (#51052) 2021-02-04 11:40:30 -08:00
test_numeric_suite_fx.py ns for fx: add weight matching for linear fp16 emulation (#54257) 2021-03-25 22:35:38 -07:00
test_qat_module.py
test_quantize.py quant: fix conv transpose with qconfig == None (#52844) 2021-02-25 11:52:30 -08:00
test_quantize_fx.py Fix skip_if_not_multigpu decorator (#54916) 2021-04-01 18:01:33 -07:00
test_quantize_jit.py [quantization] Add some support for 3d operations (#50003) 2021-03-10 16:40:35 -08:00
test_quantized_functional.py
test_quantized_module.py [quant] Reference option for conv module (#52316) 2021-02-24 14:54:02 -08:00
test_quantized_op.py [quant][fix] MHA tensor assignment fix (#53031) 2021-03-03 14:49:19 -08:00
test_quantized_tensor.py
test_workflow_module.py Fake Quantization support for f16 and f32 (#52612) 2021-02-23 10:49:12 -08:00