add AutoNonVariableTypeMode for USE_STATIC_DISPATCH on JIT->ATen path (#27274)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27274

This is yet another fix to address #26764. PR #26908 toggles NonVariableTypeMode inside the ATen dispatcher, which is where USE_STATIC_DISPATCH takes effect, so it is the most logically sound place for such a tweak. However, we observed a nontrivial perf regression with that fix. It turns out the numel() tensor method gets called in several for-loops, incurring ~7M thread_local updates in a single forward call:
```
7173330 numel
    558 size
    416 q_scale
    302 _empty_affine_quantized
    288 contiguous
    257 q_zero_point
    216 qscheme
    173 empty
    110 set_
    105 as_strided
    104 permute
    ...
```
Since numel() is not called from a single place, a natural workaround is to update function_wrapper.py so that it only adds the guard in the gen_namespace_function() case and skips the gen_tensor_method() case. But some tensor methods are called from the JIT side directly (e.g. "aten::eq_" -> "(self).eq_"), so the only band-aid left on the table is to insert the guard on the JIT->ATen path, as originally done in #26868. This is a simplified version of that change: it doesn't hurt to extend the NonVariableTypeMode scope a little to also cover the stack drop/pack calls.

On Android we only expose the JIT API, so we don't need to worry about tensor methods being called directly. On iOS we don't provide a wrapper yet, but we can mention this caveat in the docs. Hopefully, by the time it's widely used, we can finish the Variable/Tensor unification and remove all these hacks.

Test Plan:
- Verified it runs quantized/fp32 MobileNetV2 models;
- Verified it fixes the perf regression (reverting #26908 separately).

Differential Revision: D17732489

Pulled By: ljk53

fbshipit-source-id: c14ca66aebc6b6f17ad6efac7ca47f9487c98de5
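For context, AutoNonVariableTypeMode is an RAII guard over a thread-local flag: one guard per JIT op call replaces the per-tensor-method thread_local updates that caused the regression. Below is a minimal, self-contained sketch of that pattern, assuming a plain thread_local bool; everything here other than the guard's name is a hypothetical stand-in for the real ATen internals, not the actual implementation.
```
// Simplified sketch of an RAII thread-local mode guard, illustrating the
// pattern behind at::AutoNonVariableTypeMode. The flag and its handling
// are hypothetical stand-ins; the real implementation lives in ATen.
#include <cassert>

namespace sketch {

// One flag per thread; touching it on every dispatched tensor method is
// the per-call cost the summary measured (~7M numel() hits per forward).
thread_local bool non_variable_type_mode = false;

struct AutoNonVariableTypeMode {
  explicit AutoNonVariableTypeMode(bool enabled)
      : prev_(non_variable_type_mode) {
    non_variable_type_mode = enabled;  // flip on construction
  }
  ~AutoNonVariableTypeMode() {
    non_variable_type_mode = prev_;    // restore on scope exit
  }
  bool prev_;
};

}  // namespace sketch

int main() {
  using sketch::AutoNonVariableTypeMode;
  assert(!sketch::non_variable_type_mode);
  {
    // One guard per JIT op call covers every ATen call made inside it,
    // instead of one thread_local update per tensor method.
    AutoNonVariableTypeMode guard(true);
    assert(sketch::non_variable_type_mode);
  }
  assert(!sketch::non_variable_type_mode);
  return 0;
}
```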
This commit is contained in:
parent 493c900810
commit 7bd7a3d806

1 changed file with 6 additions and 0 deletions
```diff
@@ -160,9 +160,15 @@ const auto options = TensorOptions()
 auto result_ = (${first}).${name}(${args_with_tensor_options});
 """)
 
+# Adding `AutoNonVariableTypeMode` guard for `USE_STATIC_DISPATCH` case is kinda
+# hack to address issue #26764. TODO: remove this hack after Variable/Tensor
+# unification (#23032) is done.
 CONSTRUCTOR = CodeTemplate("""\
 [](Stack & stack) {
     ${lvalues}
+#ifdef USE_STATIC_DISPATCH
+    at::AutoNonVariableTypeMode non_var_type_mode(true);
+#endif
     ${call}
     drop(stack, ${num_inputs});
     pack(stack, std::move(result_));
```
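To see where the added guard lands, here is a toy, self-contained model of the stack calling convention the CONSTRUCTOR template emits: peek the inputs, run the call under the guard, then drop the inputs and pack the result. Stack, peek, drop, pack, and the int value type are simplified stand-ins for the JIT's real IValue-based helpers, not the actual generated source.
```
// Toy model of a generated op wrapper: ${lvalues}, ${call}, drop, pack.
#include <cassert>
#include <utility>
#include <vector>

using Stack = std::vector<int>;  // stand-in for std::vector<IValue>

// Look at input i of n without popping it (mirrors the JIT's peek helper).
int peek(Stack& stack, size_t i, size_t n) {
  return stack[stack.size() - n + i];
}

// Pop the n inputs of the call (mirrors the JIT's drop helper).
void drop(Stack& stack, size_t n) {
  stack.erase(stack.end() - n, stack.end());
}

// Push the result (mirrors the JIT's pack helper).
void pack(Stack& stack, int&& v) {
  stack.push_back(std::move(v));
}

thread_local bool non_variable_type_mode = false;

// Same RAII guard pattern as in the sketch above.
struct AutoNonVariableTypeMode {
  explicit AutoNonVariableTypeMode(bool on) : prev_(non_variable_type_mode) {
    non_variable_type_mode = on;
  }
  ~AutoNonVariableTypeMode() { non_variable_type_mode = prev_; }
  bool prev_;
};

int main() {
  // Hypothetical expansion for a 2-input op; the real generated code
  // wraps the guard in #ifdef USE_STATIC_DISPATCH as the diff shows.
  auto op = [](Stack& stack) {
    auto self = peek(stack, 0, 2);        // ${lvalues}
    auto other = peek(stack, 1, 2);
    AutoNonVariableTypeMode guard(true);  // the added guard; its scope also
                                          // covers the drop/pack calls below
    auto result_ = self + other;          // ${call}
    drop(stack, 2);                       // drop ${num_inputs}
    pack(stack, std::move(result_));      // push the result
  };

  Stack stack{3, 4};
  op(stack);
  assert(stack.size() == 1 && stack[0] == 7);
  return 0;
}
```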