pytorch/test/cpp/jit
Elias Ellison ae286d81e0 [JIT] improve alias analysis for list constructs (#39111)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39111

In our present alias analysis, we consider any Value that enters another container as entering the heap, and thus aliasing all other heap values of the same type. There are a number of advantages to this approach:
- it is not too hard to maintain the AliasDb implementation
- it is much easier from an op schema perspective - there are many composite list ops registered internally and externally that would be tricky to register and get right if we did something more complicated
- it limits the size of the AliasDb, because a container of size 10 contains only a single memory DAG element instead of 10 elements

The downside is that we are unable to handle the simple and extremely common case of a list of tensors being used in an ATen op.

In an example like:

```
 def foo(input):
    x = torch.tensor([1, 2, 3, 4])
    y = [x, x]
    input.add_(1)
    return torch.cat(y)
```

we will consider x to be written to: any write to any wildcard element (an element that enters a tuple, or an element that is taken from a list) marks x as written to. This limits our ability to create a functional subset and fuse graphs - as a result, 4 of the TorchVision classification models could not be functionalized.
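The imprecision can be sketched in plain Python. This is a toy model, not the actual AliasDb implementation - the class and method names below are hypothetical. The old policy merges every escaped value into one wildcard set, so a write through any member spuriously hits all of them; the new policy records which lists actually contain each value.

```python
# Hypothetical sketch of the two policies described above. The class and
# method names are illustrative only -- this is not the real AliasDb API.

class CoarseAliasDb:
    """Old policy: any value entering a container joins one wildcard set,
    and mutable graph inputs are conservatively in that set too."""

    def __init__(self):
        self.wildcard = set()

    def escapes(self, value):
        self.wildcard.add(value)

    def may_write_to(self, written, queried):
        # A write to any wildcard member may affect every wildcard member.
        return written == queried or (
            written in self.wildcard and queried in self.wildcard
        )


class PreciseAliasDb:
    """New policy: remember which lists actually contain each value, so
    unrelated values no longer alias through the wildcard set."""

    def __init__(self):
        self.containers = {}  # value -> set of list names that contain it

    def list_construct(self, list_name, elements):
        for e in elements:
            self.containers.setdefault(e, set()).add(list_name)

    def may_write_to(self, written, queried):
        # Distinct values alias only if some list contains both of them.
        shared = (self.containers.get(written, set())
                  & self.containers.get(queried, set()))
        return written == queried or bool(shared)


# Model the example: y = [x, x]; input is a mutable graph input.
coarse = CoarseAliasDb()
coarse.escapes("x")        # x enters the list y
coarse.escapes("input")    # input conservatively treated as a heap value
print(coarse.may_write_to("input", "x"))   # True: spurious write to x

precise = PreciseAliasDb()
precise.list_construct("y", ["x"])
print(precise.may_write_to("input", "x"))  # False: input never enters y
```

Under the coarse policy, `input.add_(1)` forces the analysis to assume x was mutated; under the precise one, the `torch.cat(y)` call can still be treated as reading an unmodified x.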

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23828003

Pulled By: eellison

fbshipit-source-id: 9109fcb6f2ca20ca897cae71683530285da9d537
2020-09-22 09:38:59 -07:00

# JIT C++ Tests

## How to add a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

Here is an example test file you can copy-paste:

```cpp
#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`.
void testCaseOne() {
    // ...
}

void testCaseTwo() {
    // ...
}

} // namespace jit
} // namespace torch
```

Then, register your test in `tests.h`:

```cpp
// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests
#define TH_FORALL_TESTS(_)             \
  _(ADFormulas)                        \
  _(Attributes)                        \
  ...
  _(CaseOne)  // note that the `test` prefix is omitted
  _(CaseTwo)
```

We glob all the test files together in `CMakeLists.txt` so that you don't have to edit it every time you add a test. Unfortunately, this means that in order to get the build to pick up your new test file, you need to re-run cmake:

```shell
python setup.py build --cmake
```

## Why do we have two different test runners?

We have two different ways of running our C++ tests:

1. With gtest, from a standalone binary.
2. With Python, from `TestJit.test_cpp` and `TestJit.test_cpp_cuda` (in `test/test_jit.py`).

We want both because we need to test things both from a pure-C++ environment and with all our various Python patch points enabled.

## How do I run the tests?

The following commands assume you are in the PyTorch root.

1. With gtest:

   ```shell
   # (re)build the test binary
   ninja build/bin/test_jit

   # run
   build/bin/test_jit --gtest_filter='glob_style_filter*'
   ```

2. With Python:

   ```shell
   python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda
   ```