mirror of
https://github.com/saymrwulf/pytorch.git
synced 2026-05-14 20:57:59 +00:00
Summary: The [official documentation](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html) recommends using `TORCH_LIBRARY` to register ops for TorchScript. However, that code path was never tested with a CUDA extension and is actually broken (https://github.com/pytorch/pytorch/issues/47493). This PR adds a test for it. The test will not pass CI now, but it will pass once issue https://github.com/pytorch/pytorch/issues/47493 is fixed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47524

Reviewed By: zou3519

Differential Revision: D24991839

Pulled By: ezyang

fbshipit-source-id: 037196621c7ff9a6e7905efc1097ff97906a0b1c
9 lines
202 B
Text
#include <torch/extension.h>

// Minimal custom op: a plain C++ function to be exposed to TorchScript.
bool logical_and(bool a, bool b) { return a && b; }

// Register the op under the `torch_library` namespace via TORCH_LIBRARY,
// as recommended by the official custom-op tutorial.
TORCH_LIBRARY(torch_library, m) {
  m.def("logical_and", &logical_and);
}

// No pybind bindings are needed; the op is reached through torch.ops.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {}