JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, each file should contain a single test suite.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.
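The edit to CMakeLists.txt is a one-line addition to the existing source list. A sketch of what it looks like (the surrounding entries and the JIT_TEST_ROOT variable are taken from the current file layout and may differ in your checkout; test_foo.cpp is the hypothetical new file):

# test/cpp/jit/CMakeLists.txt (sketch)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  # ... existing entries ...
  ${JIT_TEST_ROOT}/test_foo.cpp  # your new test file
)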

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test name will automatically filter it out if CUDA
// support is not compiled in.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, tests ending in `_MultiCUDA` will be skipped if fewer than two
// GPUs are detected.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}

Building and running the tests

The following commands assume you are in the PyTorch repository root.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'