# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your
test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
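For illustration, a complete test following this convention might look as follows. The suite name, test name, and assertions are hypothetical, not taken from the actual test files:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: the _CUDA suffix means it only runs when at least one
// CUDA device is available.
TEST(TensorExample, AllocatesTensorOnCuda_CUDA) {
  auto tensor = torch::ones({2, 2}, torch::device(torch::kCUDA));
  ASSERT_TRUE(tensor.device().is_cuda());
  ASSERT_EQ(tensor.sum().item<float>(), 4.0f);
}
```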
To make it runnable only on platforms with at least two CUDA devices, suffix
it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
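Similarly, a hypothetical two-device test (names and body are illustrative only) could look like:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: the _MultiCUDA suffix means it only runs when at
// least two CUDA devices are available.
TEST(TensorExample, CopiesTensorBetweenDevices_MultiCUDA) {
  auto source = torch::arange(6, torch::TensorOptions().device(torch::kCUDA, 0));
  auto copy = source.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(copy.device().index(), 1);
  ASSERT_TRUE(copy.cpu().equal(source.cpu()));
}
```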
There is logic in `main.cpp` that detects the availability and number of CUDA
devices and supplies the appropriate negative filters to GoogleTest.
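A minimal sketch of how such device-based filtering can be implemented. This is an illustration of the approach, not the actual contents of `main.cpp`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  // GoogleTest treats patterns after a leading '-' as exclusions.
  std::string filter = "-";
  const auto device_count = torch::cuda::device_count();
  if (device_count == 0) {
    filter += "*_CUDA:*_MultiCUDA";  // no GPU: skip all CUDA tests
  } else if (device_count < 2) {
    filter += "*_MultiCUDA";  // one GPU: skip only multi-GPU tests
  }
  if (filter != "-") {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}
```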
## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the
following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test
code, so you must run the integration tests from the PyTorch root folder.
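For reference, the C++ Frontend provides a `torch::data::datasets::MNIST` dataset class that reads this layout. Below is a minimal, hypothetical sketch (the suite and test names are invented, and it is not one of the actual integration tests) that relies on the relative path above:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example; only works when run from the PyTorch root folder
// after the download step above has been executed.
TEST(MnistExample, LoadsDatasetFromRelativePath) {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);

  int64_t total_images = 0;
  for (const auto& batch : *loader) {
    // After Stack<>, batch.data has shape [batch_size, 1, 28, 28].
    total_images += batch.data.size(0);
  }
  ASSERT_EQ(total_images, 60000);  // size of the MNIST training split
}
```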