pytorch/test/cpp/jit
jjsjann123 c11b301bcd [NVFUSER] refactor nvfuser build (#89621)
This PR is the first step towards refactoring the build for nvfuser so that the codegen becomes a standalone library.

Contents inside this PR:
1. The nvfuser code base has been moved from `./torch/csrc/jit/codegen/cuda/` to `./nvfuser`, except for the registration code used for integration (interface.h/interface.cpp)
2. The build system has been split so that nvfuser produces its own `.so` files. Currently these are:
    - `libnvfuser_codegen.so`, which contains nvfuser's integration, codegen, and runtime system
    - `nvfuser.so`, which exposes nvfuser's Python API via pybind. The Python frontend is now exposed via `nvfuser._C.XXX` instead of `torch._C._nvfuser`
3. The nvfuser C++ tests are now compiled into `nvfuser_tests`
4. CMake has been refactored so that:
    - nvfuser now has its own `CMakeLists.txt`, which lives under `torch/csrc/jit/codegen/cuda/`
    - nvfuser backend code is no longer compiled into `libtorch_cuda_xxx`
    - nvfuser is added as a subdirectory in the top-level `./CMakeLists.txt`, at the very end, after torch is built
    - since nvfuser depends on torch, the registration of nvfuser at runtime is done via dlopen (`at::DynamicLibrary`). This avoids a circular dependency in CMake, which would be a nightmare to handle. For details, see `torch/csrc/jit/codegen/cuda/interface.cpp::LoadingNvfuserLibrary`; a rough sketch of the pattern is shown below.
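
For illustration only, here is a minimal sketch of that dlopen-at-startup pattern. Aside from `at::DynamicLibrary` itself, the names below (library name, environment variable, error handling) are assumptions for the sketch, not the actual implementation:

```cpp
// Sketch only: load the standalone nvfuser library at runtime via dlopen
// (at::DynamicLibrary) so libtorch never links against it at build time.
#include <ATen/DynamicLibrary.h>

#include <cstdlib>
#include <iostream>
#include <memory>
#include <string>

namespace {

struct LoadingNvfuserLibrary {
  LoadingNvfuserLibrary() {
    std::string library_name = "libnvfuser_codegen.so";
    // Hypothetical override so a custom build location can be used.
    if (const char* dir = std::getenv("NVFUSER_LIBRARY_DIR")) {
      library_name = std::string(dir) + "/" + library_name;
    }
    try {
      // Opening the library runs its static initializers, which register
      // nvfuser with the hooks declared in interface.h.
      nvfuser_lib_ = std::make_shared<at::DynamicLibrary>(library_name.c_str());
    } catch (const std::exception& e) {
      // nvfuser is optional: if the library is missing, keep running without it.
      std::cerr << "Unable to load " << library_name << ": " << e.what() << "\n";
    }
  }

  std::shared_ptr<at::DynamicLibrary> nvfuser_lib_;
};

// A single static instance triggers the load during libtorch initialization.
LoadingNvfuserLibrary loading_nvfuser_library;

} // namespace
```

The upshot is that libtorch only needs the small interface shim at link time; the heavy codegen library is picked up at runtime if it is present.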

Future work scoped for follow-up PRs:
- nvfuser codegen currently depends on torch; we need to refactor that out so we can move nvfuser into a submodule and stop relying on dlopen to load the library. @malfet
- By moving nvfuser into its own CMake build, we effectively disabled the Bazel build for nvfuser. This could impact internal workloads at Meta, so we need to add that support back. cc'ing @vors

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89621
Approved by: https://github.com/davidberard98
2023-01-26 02:50:44 +00:00
upgrader_models
__init__.py
CMakeLists.txt
README.md
script_module_v4.ptl
script_module_v5.ptl
script_module_v6.ptl
source_range_test.cpp
test_add_if_then_else.cpp
test_alias_analysis.cpp
test_argument_spec.cpp
test_autodiff.cpp
test_backend.cpp
test_backend_compiler_lib.cpp
test_backend_compiler_preprocess.cpp
test_backend_lib.cpp
test_class_import.cpp
test_class_parser.cpp
test_class_type.cpp
test_cleanup_passes.cpp
test_code_template.cpp
test_concat_opt.cpp
test_constant_pooling.cpp
test_create_autodiff_subgraphs.cpp
test_cs_debug_info_serialization.cpp
test_custom_class.cpp
test_custom_class_registrations.cpp
test_custom_class_registrations.h
test_custom_operators.cpp
test_dce.cpp
test_exception.cpp
test_file_format.cpp
test_flatbuffer.cpp
test_fuser.cpp
test_graph_executor.cpp
test_graph_iterator.cpp
test_inliner.cpp
test_interface.cpp
test_interpreter.cpp
test_interpreter_async.pt
test_ir.cpp
test_irparser.cpp
test_jit_logging_levels.cpp
test_jit_type.cpp
test_lite_interpreter.cpp
test_lite_interpreter_direct.cpp
test_lite_trainer.cpp
test_load_upgraders.cpp
test_memory_dag.cpp
test_misc.cpp
test_mobile_type_parser.cpp
test_module_api.cpp
test_op_replacement.cpp
test_peephole_optimize.cpp
test_qualified_name.cpp
test_save_load.cpp
test_schema_info.cpp
test_schema_matching.cpp
test_script_profile.cpp
test_shape_analysis.cpp
test_stack_opt.cpp
test_subgraph_matcher.cpp
test_subgraph_rewriter.cpp
test_subgraph_utils.cpp
test_union.cpp
test_upgrader_utils.cpp
test_utils.cpp
test_utils.h
tests_setup.py
torch_python_test.cpp

JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, each test file should contain a single test suite.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// CUDA is not compiled.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}
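
For a slightly more concrete (purely illustrative) example, a test can build a graph with the IR parser and then assert on its structure:

#include <gtest/gtest.h>

#include <torch/csrc/jit/ir/ir.h>
#include <torch/csrc/jit/ir/irparser.h>

using namespace ::torch::jit;

// Illustrative only: parse a tiny graph and check its inputs/outputs.
TEST(FooTest, ParsesTrivialGraph) {
  auto graph = std::make_shared<Graph>();
  parseIR(
      R"IR(
graph(%a : Tensor, %b : Tensor):
  %c : Tensor = aten::mul(%a, %b)
  return (%c))IR",
      graph.get());
  EXPECT_EQ(graph->inputs().size(), 2);
  EXPECT_EQ(graph->outputs().size(), 1);
}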

Building and running the tests

The following commands assume you are in the PyTorch root directory.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'