pytorch/test/cpp
Nick Gibson f2f8027760 [TensorExpr] simplify trivial adds/subs/muls even in Float (#37960)
Summary:
The IR Simplifier early-exits when working with dtypes whose operations are not safe to reorder (e.g. floating point). There are some cases where we still want to simplify ops in these dtypes: x + 0, x - 0, x * 0, and x * 1. It's safe to eliminate the op here, and doing so reduces clutter in the expr.

Also added a quick simplification of casts which do nothing (the target dtype is the same as the dtype of the underlying expression).
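The change above can be illustrated with a toy sketch. This is not the actual TensorExpr C++ simplifier; the `simplify` function and the tuple-based IR below are hypothetical stand-ins that show the identities this commit folds (x + 0, x - 0, x * 0, x * 1) and the no-op cast elimination:

```python
def simplify(expr):
    """Fold trivial identities on a tiny tuple-based IR (illustrative only).

    An expression is a variable name (str), a number, or a tuple:
    ("add" | "sub" | "mul", lhs, rhs) or ("cast", dtype, inner, inner_dtype).
    """
    if not isinstance(expr, tuple):
        return expr
    op = expr[0]
    if op == "cast":
        _, dtype, inner, inner_dtype = expr
        inner = simplify(inner)
        # A cast whose target dtype equals the operand's dtype is a no-op.
        if dtype == inner_dtype:
            return inner
        return ("cast", dtype, inner, inner_dtype)
    _, lhs, rhs = expr
    lhs, rhs = simplify(lhs), simplify(rhs)
    if op == "add" and rhs == 0:
        return lhs  # x + 0 -> x
    if op == "sub" and rhs == 0:
        return lhs  # x - 0 -> x
    if op == "mul" and rhs == 1:
        return lhs  # x * 1 -> x
    if op == "mul" and rhs == 0:
        return 0    # x * 0 -> 0 (the commit treats this as safe in its context)
    return (op, lhs, rhs)
```

These identity folds are distinct from general reordering, which the simplifier still refuses for floats: float addition is not associative, e.g. (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3) in IEEE 754 double precision.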
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37960

Differential Revision: D21457736

Pulled By: nickgg

fbshipit-source-id: 40e20a3b55fc1afb2ec50071812238a08bded2ac
2020-05-07 17:23:47 -07:00
api [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) 2020-05-05 08:41:38 -07:00
common
dist_autograd Fix/relax CMake linter rules (#35574) 2020-03-27 16:52:33 -07:00
jit Make profiler thread local (#36291) 2020-05-07 14:52:49 -07:00
rpc [TensorPipe] Use the new multi-payload message API (#37919) 2020-05-07 02:52:30 -07:00
tensorexpr [TensorExpr] simplify trivial adds/subs/muls even in Float (#37960) 2020-05-07 17:23:47 -07:00
__init__.py