pytorch/test/cpp/jit
Michael Suo 9e32a1f5cd [wip] update graph fuser aliasdb in-place (#37106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37106

Recomputing the aliasdb on every fusion iteration + in every subblock
is hugely expensive. Instead, update it in-place when doing fusion.

The graph fuser pass operates by pushing nodes into a fusion group. So
we start with
```
x, y = f(a, b, c)
```

and end with:
```
x_out, y_out = prim::fusionGroup(a, b, c)
   x_in, y_in = f(a_in, b_in, c_in)
   -> x_in, y_in
```

We destroy the `x` and `y` `Value*`s in the process. This operation is
easy to express as an update to the aliasDb--`x_out` just takes on all
the aliasing information `x` used to have. In particular, since we know
`f` and `prim::fusionGroup` are purely functional, we don't have to mess
with any write information.

This PR is the bare minimum to get this working, in the interest of
unscrewing the compilation times ASAP.

Followups I want to do:
- We don't have a way of expressing deletion of values in AliasDb. In
`graph_fuser.cpp` we sometimes construct nodes that we end up throwing
away, and we are littering `MemoryDAG` with references to dangling
pointers. Because of the way the pass works, it's fine, but this is
fragile so I want to fix it.
- We should decouple alias analysis from write tracking, to simplify the
job of keeping the write caches consistent as we mutate the aliasing
information.
- The tensorexpr fuser doesn't do this and is thus incorrect today; we
need to update it as well.

Test Plan: Imported from OSS

Differential Revision: D21219179

Pulled By: suo

fbshipit-source-id: 8ae5397b3a0ad90edec2fbc555647091f1ad5284
2020-04-30 22:21:35 -07:00
__init__.py
CMakeLists.txt [cmake] add USE_SYSTEM_{XNNPACK,ONNX} options. (#37501) 2020-04-29 09:26:16 -07:00
gtest.cpp
README.md
test_alias_analysis.cpp [jit] speed up alias analysis (#36345) 2020-04-30 18:27:41 -07:00
test_argument_spec.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_autodiff.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_base.cpp [pytorch] Fix fblearner flow compiling errors (#35902) 2020-04-02 14:52:48 -07:00
test_base.h [jit] do the code reorg (#33851) 2020-02-27 13:02:51 -08:00
test_class_import.cpp [jit] fix named tuples as attributes (#37251) 2020-04-24 17:48:44 -07:00
test_class_parser.cpp [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
test_class_type.cpp [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
test_code_template.cpp [jit] do the code reorg (#33851) 2020-02-27 13:02:51 -08:00
test_constant_pooling.cpp Teach IRParser to parse strides along with sizes in a tensor type. (#36951) 2020-04-21 17:27:15 -07:00
test_create_autodiff_subgraphs.cpp
test_custom_class.cpp Back out "Revert D21089648: Put TORCH_LIBRARY in torch/library.h; add custom class API" 2020-04-22 09:18:23 -07:00
test_custom_operators.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_dce.cpp [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
test_fuser.cpp [wip] update graph fuser aliasdb in-place (#37106) 2020-04-30 22:21:35 -07:00
test_gpu.cpp Teach IRParser to parse strides along with sizes in a tensor type. (#36951) 2020-04-21 17:27:15 -07:00
test_graph_executor.cpp improved TorchScript traceback (#33834) 2020-03-03 12:27:38 -08:00
test_inliner.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_interface.cpp [jit] fix named tuples as attributes (#37251) 2020-04-24 17:48:44 -07:00
test_interpreter.cpp improved TorchScript traceback (#33834) 2020-03-03 12:27:38 -08:00
test_ir.cpp [jit] kill script namespace (#34515) 2020-03-11 23:32:48 -07:00
test_irparser.cpp Teach IRParser to parse strides along with sizes in a tensor type. (#36951) 2020-04-21 17:27:15 -07:00
test_jit_type.cpp [jit] do the code reorg (#33851) 2020-02-27 13:02:51 -08:00
test_lite_interpreter.cpp Add DICT_CONSTRUCT and NAMED_TUPLE_CONSTRUCT to lite interpreter (#36015) 2020-04-04 09:52:58 -07:00
test_misc.cpp [resubmit] Enable global observers API (#37382) 2020-04-28 10:49:31 -07:00
test_mobile_type_parser.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_module_api.cpp [jit] __deepcopy__ for RecursiveScriptModule (#32684) 2020-04-28 18:47:11 -07:00
test_peephole_optimize.cpp [JIT] Dont optimize shape peepholes on inline (#36404) 2020-04-15 17:49:48 -07:00
test_qualified_name.cpp
test_save_load.cpp Fix clang-format (#35969) 2020-04-03 14:36:20 -07:00
test_schema_matching.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_subgraph_matcher.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_subgraph_rewriter.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_subgraph_utils.cpp
test_utils.cpp [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
test_utils.h [JIT] clang-format JIT code (#35115) 2020-03-26 11:24:51 -07:00
tests.h [wip] update graph fuser aliasdb in-place (#37106) 2020-04-30 22:21:35 -07:00
tests_setup.py
torch_python_test.cpp Enable tensorexpr cpp tests in CI. try #2 (#35454) 2020-03-27 12:09:55 -07:00

# JIT C++ Tests

## How to add a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

Here is an example test file you can copy-paste.

```cpp
#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`.
void testCaseOne() {
  // ...
}

void testCaseTwo() {
  // ...
}

} // namespace jit
} // namespace torch
```

Then, register your test in `tests.h`:

```cpp
// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests
#define TH_FORALL_TESTS(_)             \
  _(ADFormulas)                        \
  _(Attributes)                        \
  ...
  _(CaseOne)  // note that the `test` prefix is omitted.
  _(CaseTwo)
```

We glob all the test files together in `CMakeLists.txt` so that you don't have to edit it every time you add a test. Unfortunately, this means that in order to get the build to pick up your new test file, you need to re-run cmake:

```bash
python setup.py build --cmake
```

## Why do we have two different test runners?

We have two different ways of running our cpp tests:

  1. With gtest, from a standalone binary.
  2. With Python, from `TestJit.test_cpp` and `TestJit.test_cpp_cuda` (in `test/test_jit.py`)

We want both because we need to test things both from a pure C++ environment and with all our various Python patch points enabled.

## How do I run the tests?

The following commands assume you are in PyTorch root.

1. With `gtest`:

   ```bash
   # (re)build the test binary
   ninja build/bin/test_jit

   # run
   build/bin/test_jit --gtest_filter='glob_style_filter*'
   ```

2. With Python:

   ```bash
   python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda
   ```