JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, a test file should contain a single test suite.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.
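The CMakeLists.txt change might look like the following (an illustrative excerpt only; the variable name JIT_TEST_ROOT and the surrounding entries are assumptions, so check the actual file for the current contents):

```cmake
# test/cpp/jit/CMakeLists.txt (illustrative excerpt)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  ${JIT_TEST_ROOT}/test_autodiff.cpp
  # ... existing entries ...
  ${JIT_TEST_ROOT}/test_foo.cpp  # <-- add your new test file here
)
```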

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out
// if PyTorch was not compiled with CUDA.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}

Building and running the tests

The following commands assume you are in the PyTorch root directory.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'