
# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix its name with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make a test runnable only on platforms with at least two CUDA devices, suffix its name with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you *must* run the integration tests from the PyTorch root folder.