
# JIT C++ Tests

## How to add a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

Here is an example test file you can copy-paste:

```cpp
#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`.
void testCaseOne() {
  // ...
}

void testCaseTwo() {
  // ...
}

} // namespace jit
} // namespace torch
```

Then, register your test in `tests.h`:

```cpp
// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests.
#define TH_FORALL_TESTS(_)             \
  _(ADFormulas)                        \
  _(Attributes)                        \
  ...
  _(CaseOne)  // note that the `test` prefix is omitted.
  _(CaseTwo)
```

We glob all the test files together in `CMakeLists.txt`, so you don't have to edit it every time you add a test. Unfortunately, this means that for the build to pick up your new test file, you need to re-run CMake:

```bash
python setup.py build --cmake
```

## Why do we have two different test runners?

We have two different ways of running our C++ tests:

1. With `gtest`, from a standalone binary.
2. With Python, from `TestJit.test_cpp` and `TestJit.test_cpp_cuda` (in `test/test_jit.py`).

We want both because we need to test things from a pure-C++ environment and with all our various Python patch-points enabled.

## How do I run the tests?

The following commands assume you are in the PyTorch root directory.

1. With `gtest`:

   ```bash
   # (re)build the test binary
   ninja build/bin/test_jit
   # run the tests, optionally filtered by name
   build/bin/test_jit --gtest_filter='glob_style_filter*'
   ```

2. With Python:

   ```bash
   python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda
   ```