
JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, a single test file should correspond to a single test suite covering one area of related functionality.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.
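The CMakeLists.txt change is a one-line addition. A sketch of what it might look like (the exact variable layout and the `JIT_TEST_ROOT` prefix are assumptions; check the actual file):

```cmake
# test/cpp/jit/CMakeLists.txt (sketch; surrounding entries elided)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  ${JIT_TEST_ROOT}/test_foo.cpp   # <-- your new test file
)
```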

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test name will automatically filter it out if
// PyTorch was not compiled with CUDA.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, tests with `_MultiCUDA` at the end will not be run unless
// multiple GPUs are detected.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}

Building and running the tests

The following commands assume you are in the PyTorch root directory.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'