TensorExpr C++ Tests

How to add a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

Here is an example test file you can copy-paste.

#include <test/cpp/tensorexpr/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`
void testCaseOne() {
    // ...
}

void testCaseTwo() {
    // ...
}
} // namespace jit
} // namespace torch

Then, register your test in tests.h:

// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests.
// Note that the `test` prefix of the function name is omitted here.
#define TH_FORALL_TESTS(_)             \
  _(ADFormulas)                        \
  _(Attributes)                        \
  ...
  _(CaseOne)                           \
  _(CaseTwo)

We glob all the test files together in CMakeLists.txt so that you don't have to edit it every time you add a test. Unfortunately, this means that in order to get the build to pick up your new test file, you need to re-run cmake:

python setup.py build --cmake

How do I run the tests?

The following commands assume you are in the PyTorch root directory.

# (re)build the test binary
ninja build/bin/test_tensorexpr
# run
build/bin/test_tensorexpr --gtest_filter='glob_style_filter*'