
# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix its name with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
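The filtering idea can be sketched roughly as follows. This is not the actual `main.cpp` logic, just a minimal illustration of how a GoogleTest negative filter string could be derived from the device count; the function name `cuda_test_filter` is hypothetical.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: given the number of visible CUDA devices, build a
// GoogleTest filter that excludes tests whose suffix requires more devices
// than are present. A leading '-' makes the patterns negative filters, and
// ':' separates multiple patterns.
std::string cuda_test_filter(int device_count) {
  if (device_count == 0) {
    // No CUDA at all: exclude both single- and multi-device tests.
    return "-*_CUDA:*_MultiCUDA";
  }
  if (device_count == 1) {
    // One device: only the multi-device tests must be excluded.
    return "-*_MultiCUDA";
  }
  // Two or more devices: run everything.
  return "*";
}
```

Such a string would typically be handed to GoogleTest via its `--gtest_filter` mechanism before `RUN_ALL_TESTS()` is invoked.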

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
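Because the dataset path is relative, a quick sanity check from the working directory can tell whether the tests were launched from the right place. The helper below is a hypothetical illustration, not part of the test suite, and the checked file name assumes the standard MNIST training-images file:

```cpp
#include <filesystem>
#include <string>

// Hypothetical helper: returns true only if the MNIST training-images file
// is visible from the current working directory, which is the case when the
// integration tests are launched from the PyTorch root folder.
bool mnist_available(const std::string& root) {
  namespace fs = std::filesystem;
  // "train-images-idx3-ubyte" is the conventional MNIST file name; the
  // actual files present depend on what tools/download_mnist.py produced.
  return fs::exists(fs::path(root) / "train-images-idx3-ubyte");
}
```

A test could call `mnist_available("test/cpp/api/mnist")` and skip (or fail with a clear message) instead of crashing on a missing file.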