# JIT C++ Tests

## Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

In general, each test file defines a single test suite.

Add your test file to the `JIT_TEST_SRCS` list in `test/cpp/jit/CMakeLists.txt`.
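The addition may look roughly like this (an illustrative sketch; check `test/cpp/jit/CMakeLists.txt` for the actual list layout and variable names such as `JIT_TEST_ROOT`):

```cmake
# Illustrative fragment: add the new test file to the existing source list.
set(JIT_TEST_SRCS
  # ... existing test files ...
  ${JIT_TEST_ROOT}/test_foo.cpp  # the new test file
)
```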

A test file may look like:

```cpp
#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
  // ...
}

// Appending '_CUDA' to the test case name will automatically filter the test
// out if CUDA is not compiled.
TEST(FooTest, NeedsAGpu_CUDA) {
  // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
  // ...
}
```

## Building and running the tests

The following commands assume you are in the PyTorch root directory.

```bash
# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'
```