pytorch/test
Kimish Patel 3e58cba3c5 Fixes the Conv2d batch_norm folding for various cases. (#34932)
Summary:
This PR adds a preprocessing step to Conv2d-BatchNorm folding.
It traverses the module to check whether the bias of a Conv2d module is
set to None. If it is, it assumes the module was traced and inserts an
Optional[Tensor]-typed bias.
Furthermore, it inserts a GetAttr for the bias in the forward graph and
fixes the _convolution op to take its value from that GetAttr.
It also fixes parameter extraction from the BN module, which may not
have weight and bias attributes if affine was set to False. In scripted
mode such a BN module will have its weight and bias attributes set to None.
In tracing, eps gets constant-propagated; this is also fixed.
A few test cases are added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34932
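
The folding this pass prepares for combines a Conv2d and the following
BatchNorm into a single convolution: each output channel's filter is scaled
by gamma / sqrt(var + eps), and the bias is shifted accordingly. A minimal
NumPy sketch (not the actual JIT pass; `fold_conv_bn` and its argument names
are hypothetical) showing why a None conv bias and None BN weight/bias
(affine=False) need defaults before folding:

```python
import numpy as np

def fold_conv_bn(conv_w, conv_b, bn_gamma, bn_beta, bn_mean, bn_var, eps=1e-5):
    """Fold BatchNorm statistics into the preceding conv's weight and bias.

    conv_b may be None (Conv2d created with bias=False): the folded conv
    then gains a bias, which is why the pass must first give the module an
    Optional[Tensor] bias attribute. If BN had affine=False, gamma and beta
    are None and default to 1 and 0.
    """
    if bn_gamma is None:          # BN created with affine=False
        bn_gamma = np.ones_like(bn_mean)
    if bn_beta is None:
        bn_beta = np.zeros_like(bn_mean)
    if conv_b is None:            # conv created with bias=False
        conv_b = np.zeros(conv_w.shape[0])
    scale = bn_gamma / np.sqrt(bn_var + eps)
    # Scale each output channel's filter, then fold mean/beta into the bias.
    folded_w = conv_w * scale.reshape(-1, 1, 1, 1)
    folded_b = (conv_b - bn_mean) * scale + bn_beta
    return folded_w, folded_b
```

For a 1x1 conv on a single pixel, applying the folded weights reproduces
BN(conv(x)) exactly, which is the invariant the tests below exercise.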

Test Plan:
python test/test_jit.py TestJit.test_foldbn_trivial
python test/test_jit.py TestJit.test_foldbn_trivial_nobias
python test/test_jit.py TestJit.test_foldbn_in_submodule
python test/test_jit.py TestJit.test_foldbn_shared_classtype
python test/test_jit.py TestJit.test_foldbn_complex_cases
python test/test_jit.py TestJit.test_nofoldbn_complex_cases

Differential Revision: D20536478

Pulled By: kimishpatel

fbshipit-source-id: 4e842976a380d0575a71001bb4481390c08c259e
2020-03-20 20:06:44 -07:00
backward_compatibility [JIT] add id function (#34975) 2020-03-20 20:03:10 -07:00
bottleneck_test
cpp [C++ API Parity] Add xor_convergence test for lbfgs (#35001) 2020-03-20 06:57:24 -07:00
cpp_api_parity [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
cpp_extensions
custom_operator [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
distributed Revert D20541921: [pytorch][PR] [RELAND] Eager autocasting, out-of-place ops only (with MSVC 2017 fix) 2020-03-19 22:39:12 -07:00
error_messages
expect
jit [JIT] Add support for tolist for GPU-resident Tensors (#34554) 2020-03-11 15:14:12 -07:00
mobile [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
onnx Fix torch.mm export to ONNX (#34661) 2020-03-19 21:59:34 -07:00
optim
scripts
type_hint_tests Support for Tensor Shape Type Hint (#34595) 2020-03-13 15:16:24 -07:00
HowToWriteTestsUsingFileCheck.md
run_test.py Add TensorExpr Fuser tests (resubmit). (#35085) 2020-03-20 13:19:31 -07:00
simulate_nccl_errors.py
te_utils.py [TensorExpr] Pull changes from bertmaher/pytorch_fusion. (#34842) 2020-03-17 11:02:48 -07:00
test_autograd.py functional autograd api (#34066) 2020-03-19 08:24:07 -07:00
test_complex.py
test_cpp_api_parity.py [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
test_cpp_extensions_aot.py
test_cpp_extensions_jit.py
test_cuda.py Revert D20541921: [pytorch][PR] [RELAND] Eager autocasting, out-of-place ops only (with MSVC 2017 fix) 2020-03-19 22:39:12 -07:00
test_cuda_primary_ctx.py
test_dataloader.py
test_determination.py
test_distributions.py Continuous bernoulli distribution (take 2) (#34619) 2020-03-12 11:53:18 -07:00
test_docs_coverage.py
test_expecttest.py
test_fake_quant.py
test_function_schema.py
test_indexing.py
test_jit.py Fixes the Conv2d batch_norm folding for various cases. (#34932) 2020-03-20 20:06:44 -07:00
test_jit_disabled.py
test_jit_fuser.py Fix warnings in test/test_jit_fuser.py (#34980) 2020-03-18 19:55:25 -07:00
test_jit_fuser_legacy.py
test_jit_fuser_te.py Add TensorExpr Fuser tests (resubmit). (#35085) 2020-03-20 13:19:31 -07:00
test_jit_legacy.py
test_jit_py3.py [jit] Include call stack in OSError message (#34669) 2020-03-18 15:10:23 -07:00
test_jit_simple.py
test_jit_string.py
test_logging.py
test_mkldnn.py
test_multiprocessing.py
test_multiprocessing_spawn.py
test_namedtensor.py Adds true_divide function, analogous to Python's, JAX's, NumPy's (true) division (#34236) 2020-03-09 21:06:33 -07:00
test_namedtuple_return_api.py
test_nn.py Adds truncated normal initializer (#32397) 2020-03-20 10:29:05 -07:00
test_numba_integration.py
test_optim.py Turn on exact_dtype by default on test_optim.py (#34825) 2020-03-17 14:41:13 -07:00
test_overrides.py Add types argument to __torch_function__ (#34303) 2020-03-17 13:32:00 -07:00
test_qat.py
test_quantization.py [quant][graphmode] Add Finalize function that inlines graph and produce quantized ops (#33927) 2020-03-12 14:52:58 -07:00
test_quantized.py Revert e7fc55e (#35080) 2020-03-19 22:32:32 -07:00
test_quantized_models.py
test_quantized_nn_mods.py Add the quantized batch_norm3d and also batch_norm3d fused with relu operators (#34702) 2020-03-13 20:30:28 -07:00
test_quantized_tensor.py
test_serialization.py
test_sparse.py Makes floor_divide a method, adds sparse floor division (#34552) 2020-03-18 15:00:53 -07:00
test_tensorboard.py
test_tensorexpr.py [TensorExpr] Pull changes from bertmaher/pytorch_fusion. (#34842) 2020-03-17 11:02:48 -07:00
test_throughput_benchmark.py
test_torch.py randn cuda kernel complex dtype (#35056) 2020-03-20 11:19:08 -07:00
test_type_hints.py Support for Tensor Shape Type Hint (#34595) 2020-03-13 15:16:24 -07:00
test_type_info.py
test_type_promotion.py Revert D20312366: [pytorch][PR] Added type promotion logic for complex numbers 2020-03-19 05:55:57 -07:00
test_utils.py Add retry decorator and use it for Hub tests. (#34829) 2020-03-16 20:19:45 -07:00
test_xnnpack_integration.py Pass to remove prepacking ops. (#34319) 2020-03-14 12:53:31 -07:00