# JIT C++ Tests

## Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

In general, each test file should contain a single test suite.

Add your test file to the `JIT_TEST_SRCS` list in `test/cpp/jit/CMakeLists.txt`.
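As a rough sketch of what that edit looks like (the exact variable names and file list vary by release, so check the `CMakeLists.txt` in your checkout):

```cmake
# test/cpp/jit/CMakeLists.txt (abridged sketch)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  ${JIT_TEST_ROOT}/test_autodiff.cpp
  # ... many more files ...
  ${JIT_TEST_ROOT}/test_foo.cpp   # <-- your new test file
)
```

Files in this list are compiled into the single `test_jit` binary, so no other build changes are needed.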

A test file may look like:

```cpp
#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
  // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// CUDA is not compiled.
TEST(FooTest, NeedsAGpu_CUDA) {
  // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
  // ...
}
```

## Building and running the tests

The following commands assume you are in the PyTorch root directory.

```bash
# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'
```
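`--gtest_filter` takes glob-style patterns matched against `SuiteName.TestName`; a few illustrative invocations, assuming the `test_jit` binary built above:

```shell
# Run every test in the FooTest suite
build/bin/test_jit --gtest_filter='FooTest.*'

# Run only CUDA-gated tests (the '_CUDA' suffix convention described above)
build/bin/test_jit --gtest_filter='*_CUDA'

# Run everything except multi-GPU tests ('-' introduces negative patterns)
build/bin/test_jit --gtest_filter='*:-*_MultiCUDA'
```

Multiple positive patterns are separated by `:`; anything after a `-` is excluded.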