C++ API Tests

In this folder live the tests for PyTorch's C++ API (formerly known as autogradpp). They use the Catch2 test framework.
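To give a sense of the style, here is a minimal sketch of what a test in this folder might look like, assuming the torch/torch.h umbrella header and the pImpl-style Linear module; the test name, module, and shapes are illustrative and do not correspond to an actual test in the suite.

```cpp
#include <catch.hpp>

#include <torch/torch.h>

// Illustrative only: checks that a Linear module produces the expected output shape.
TEST_CASE("example/linear-output-shape") {
  torch::nn::Linear linear(/*in=*/4, /*out=*/2);
  auto output = linear->forward(torch::ones({3, 4}));
  REQUIRE(output.size(0) == 3);
  REQUIRE(output.size(1) == 2);
}
```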

CUDA Tests

We handle CUDA tests by putting them in their own TEST_CASE (e.g. optim.cpp has both an optim and an optim_cuda test case) and giving that test case the [cuda] tag. Inside main.cpp we then detect at runtime whether CUDA is available; if it is not, we disable these tests by appending ~[cuda] to the test specification (the ~ excludes tests carrying that tag). A sketch of this wiring is shown below.
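A simplified sketch of that runtime filter, assuming at::hasCUDA() as the availability check; the actual main.cpp may wire this up differently.

```cpp
// main.cpp (simplified): drop [cuda]-tagged tests when no CUDA device is available.
#define CATCH_CONFIG_RUNNER
#include <catch.hpp>

#include <ATen/ATen.h>

#include <string>
#include <vector>

int main(int argc, char* argv[]) {
  std::vector<std::string> args(argv, argv + argc);
  if (!at::hasCUDA()) {
    args.emplace_back("~[cuda]");  // '~' excludes everything carrying the [cuda] tag
  }
  std::vector<const char*> raw;
  for (const auto& arg : args) {
    raw.push_back(arg.c_str());
  }
  return Catch::Session().run(static_cast<int>(raw.size()), raw.data());
}
```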

One annoying aspect is that Catch only allows filtering on test cases and not sections. Ideally, one could have a section like LSTM inside the RNN test case, and give this section a [cuda] tag to only run it when CUDA is available. Instead, we have to create a whole separate RNN_cuda test case and put all these CUDA sections in there.
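For concreteness, the workaround looks roughly like this; the section names are illustrative, not the actual contents of rnn.cpp.

```cpp
#include <catch.hpp>

// CPU variant: sections cannot carry their own tags, so they always run.
TEST_CASE("RNN") {
  SECTION("LSTM") {
    // CPU-only LSTM checks.
  }
}

// CUDA variant: an entire separate test case carries the [cuda] tag,
// so main.cpp can exclude it with ~[cuda] when CUDA is unavailable.
TEST_CASE("RNN_cuda", "[cuda]") {
  SECTION("LSTM") {
    // The CUDA versions of the same sections live here.
  }
}
```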

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist