pytorch/test/cpp/jit
Elias Ellison 39be20f259 [JIT][NNC] Add handling of strides to dynamic shape support. (#70464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70464

Add handling of strided input tensors to dynamic fusion. This is done with the same set of input striding specializations as https://github.com/pytorch/pytorch/pull/60684/:
```
  S_ONE, // STRIDE_ONE: packed
  S_CONT, // STRIDE_CONTIGUOUS: stride[i + 1] * sizes[i + 1]
  S_TRAN_CONT, // STRIDE_TRANSPOSED_CONTIGUOUS: stride[i-1] * sizes[i-1]
  S_AS_ARG, // STRIDE_AS_ARG: stride passed in as runtime value
```
and then two additional specializations for (a) contiguous tensors and (b) channels-last tensors. Channels-last is a common case and we should optimize for it. Additionally, tensors natively record whether they are contiguous or channels-last contiguous, which makes it faster to check whether a tensor follows either pattern.

Output striding will be done in a follow up.

The striding is stored on both the TensorExprGroup node and on the guard node. The striding descriptors are stored as a vector of strings on the node for debuggability and to make use of the existing support for storing IValues as node attributes.

As an example:

```
%8 : Double(10, 11, 12, 13, strides=[1716, 1, 143, 11], requires_grad=0, device=cpu) = prim::TensorExprGroup_0[symbolic_shape_inputs=[-37, -36, -35, -34], striding_inputs_desc=[["TENSOR_CONT_CHANNELS_LAST"]](%x, %24, %23, %22, %21)
```

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D33458649

Pulled By: eellison

fbshipit-source-id: c42616d3c683d70f6258180d23d3841a31a6030d
2022-01-12 09:11:31 -08:00
upgrader_models [Operator Versioning][Edge] Add old models and unittest (#67726) 2021-12-01 18:46:30 -08:00
__init__.py
CMakeLists.txt Bump version number to 7 and compile old operators with old schema (#68358) 2022-01-05 23:57:22 -08:00
README.md
script_module_v4.ptl
script_module_v5.ptl
script_module_v6.ptl
test_alias_analysis.cpp [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT 2022-01-12 04:16:43 -08:00
test_argument_spec.cpp
test_autodiff.cpp
test_backend.cpp Back out "[pytorch][PR] Add ability for a mobile::Module to save as flatbuffer" (#69796) 2021-12-10 21:29:53 -08:00
test_backend_compiler_lib.cpp [Profiler] Clean up profiler includes. (#69421) 2021-12-15 12:50:24 -08:00
test_backend_compiler_preprocess.cpp
test_backend_lib.cpp
test_class_import.cpp
test_class_parser.cpp
test_class_type.cpp
test_cleanup_passes.cpp
test_code_template.cpp
test_concat_opt.cpp
test_constant_pooling.cpp
test_create_autodiff_subgraphs.cpp
test_cs_debug_info_serialization.cpp
test_custom_class.cpp
test_custom_class_registrations.cpp
test_custom_class_registrations.h
test_custom_operators.cpp
test_dce.cpp
test_fuser.cpp
test_gpu.cpp Nvfuser code bump 12 5 (#69964) 2021-12-16 08:28:54 -08:00
test_gpu_shift.cpp Nvfuser code bump 12 5 (#69964) 2021-12-16 08:28:54 -08:00
test_gpu_validator.h Nvfuser code bump 12 5 (#69964) 2021-12-16 08:28:54 -08:00
test_graph_executor.cpp
test_graph_iterator.cpp
test_inliner.cpp
test_interface.cpp
test_interpreter.cpp
test_interpreter_async.pt
test_ir.cpp
test_irparser.cpp
test_jit_logging_levels.cpp
test_jit_type.cpp
test_lite_interpreter.cpp [jit][edge] Use dynamic type instead of union types for schema parsers. (#70509) 2022-01-11 20:14:25 -08:00
test_lite_interpreter_direct.cpp
test_lite_trainer.cpp
test_load_upgraders.cpp Bump version number to 7 and compile old operators with old schema (#68358) 2022-01-05 23:57:22 -08:00
test_memory_dag.cpp
test_misc.cpp [JIT][NNC] Add handling of strides to dynamic shape support. (#70464) 2022-01-12 09:11:31 -08:00
test_mobile_type_parser.cpp [jit][edge] Migrate base types to DynamicType on mobile. (#70233) 2022-01-11 13:53:29 -08:00
test_module_api.cpp [PyTorch] Add Enum to IValue Deepcopy (#69937) 2021-12-30 07:52:22 -08:00
test_op_replacement.cpp Add graph op replacement pass (#69915) 2021-12-25 13:03:19 -08:00
test_peephole_optimize.cpp
test_qualified_name.cpp
test_save_load.cpp
test_schema_matching.cpp
test_script_profile.cpp
test_shape_analysis.cpp [JIT][NNC] Add handling of strides to dynamic shape support. (#70464) 2022-01-12 09:11:31 -08:00
test_stack_opt.cpp
test_subgraph_matcher.cpp
test_subgraph_rewriter.cpp
test_subgraph_utils.cpp
test_union.cpp
test_upgrader_utils.cpp Add utility methods to find an upgrader (#68355) 2021-12-24 12:23:04 -08:00
test_utils.cpp
test_utils.h
tests_setup.py
torch_python_test.cpp Remove WindowsTorchApiMacro.h in favor of Export.h (#69585) 2021-12-09 17:30:09 -08:00

# JIT C++ Tests

## Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

In general, each test file should contain a single, related test suite.

Add your test file to the `JIT_TEST_SRCS` list in `test/cpp/jit/CMakeLists.txt`.
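The CMake change is a one-line addition. A hypothetical fragment (the entries shown are illustrative; the real `JIT_TEST_SRCS` list is much longer) might look like:

```cmake
# test/cpp/jit/CMakeLists.txt (illustrative excerpt)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  ${JIT_TEST_ROOT}/test_autodiff.cpp
  ${JIT_TEST_ROOT}/test_foo.cpp  # your new test file
)
```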

A test file may look like:

```cpp
#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
  // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// CUDA is not compiled.
TEST(FooTest, NeedsAGpu_CUDA) {
  // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
  // ...
}
```

## Building and running the tests

The following commands assume you are in PyTorch root.

```bash
# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'
```