// pytorch/test/cpp_extensions/extension.cpp


#include <torch/extension.h>

// test include_dirs in setuptools.setup with relative path
#include <tmp.h>

torch::Tensor sigmoid_add(torch::Tensor x, torch::Tensor y) {
  return x.sigmoid() + y.sigmoid();
}

struct MatrixMultiplier {
  MatrixMultiplier(int A, int B) {
    tensor_ =
        torch::ones({A, B}, torch::dtype(torch::kFloat64).requires_grad(true));
  }

  torch::Tensor forward(torch::Tensor weights) {
    return tensor_.mm(weights);
  }

  torch::Tensor get() const {
    return tensor_;
  }

 private:
  torch::Tensor tensor_;
};
bool function_taking_optional(std::optional<torch::Tensor> tensor) {
  return tensor.has_value();
}

torch::Tensor random_tensor() {
  return torch::randn({1});
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("sigmoid_add", &sigmoid_add, "sigmoid(x) + sigmoid(y)");
  m.def(
      "function_taking_optional",
      &function_taking_optional,
      "function_taking_optional");
  py::class_<MatrixMultiplier>(m, "MatrixMultiplier")
      .def(py::init<int, int>())
      .def("forward", &MatrixMultiplier::forward)
      .def("get", &MatrixMultiplier::get);
  m.def("get_complex", []() { return c10::complex<double>(1.0, 2.0); });
  m.def("get_device", []() { return at::device_of(random_tensor()).value(); });
  m.def("get_generator", []() { return at::detail::getDefaultCPUGenerator(); });
  m.def("get_intarrayref", []() { return at::IntArrayRef({1, 2, 3}); });
  m.def(
      "get_memory_format", []() { return c10::get_contiguous_memory_format(); });
  m.def("get_storage", []() { return random_tensor().storage(); });
  m.def("get_symfloat", []() { return c10::SymFloat(1.0); });
  m.def("get_symint", []() { return c10::SymInt(1); });
  m.def("get_symintarrayref", []() { return at::SymIntArrayRef({1, 2, 3}); });
  m.def("get_tensor", []() { return random_tensor(); });
}
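// An extension like this is typically compiled with setuptools via
// torch.utils.cpp_extension. Below is a minimal build-script sketch, not the
// actual setup.py from the PyTorch test suite: the module name "extension"
// and the "include_dir" path (added so that `#include <tmp.h>` resolves) are
// illustrative assumptions.

```python
# setup.py (sketch) -- assumes PyTorch is installed and extension.cpp
# plus an include directory containing tmp.h sit next to this file.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="extension",
    ext_modules=[
        CppExtension(
            # The extension name is what TORCH_EXTENSION_NAME expands to
            # in PYBIND11_MODULE(TORCH_EXTENSION_NAME, m).
            name="extension",
            sources=["extension.cpp"],
            # Relative include path so `#include <tmp.h>` is found
            # (this is what the comment in extension.cpp is testing).
            include_dirs=["include_dir"],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

// After `python setup.py build_ext --inplace`, the module would be imported
// from Python as `import extension` and exercised via e.g.
// `extension.sigmoid_add(x, y)` or `extension.MatrixMultiplier(4, 8)`.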