pytorch/test/custom_operator/op.cpp
Sebastian Messmer 243298668c Remove confusing torch::jit::RegisterOperators for custom ops (#28229)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28229

We have `torch::RegisterOperators` for custom ops. `torch::jit::RegisterOperators` had a dual role: it registered custom ops when called one way and pure JIT ops when called another way.
This is confusing because you end up in a different operator library depending on exactly which API you're using.

This PR removes the ability of torch::jit::RegisterOperators to register custom ops and forces people to use the new torch::RegisterOperators.

This path was already deprecated; now we remove it.
ghstack-source-id: 92137305

Test Plan: unit tests

Differential Revision: D17981895

fbshipit-source-id: 0af267dfdc3c6a2736740091cf841bac40deff40
2019-10-18 10:46:31 -07:00


#include <torch/script.h>

#include "op.h"

#include <cstddef>
#include <string>
#include <vector>

std::vector<torch::Tensor> custom_op(
    torch::Tensor tensor,
    double scalar,
    int64_t repeat) {
  std::vector<torch::Tensor> output;
  output.reserve(repeat);
  for (int64_t i = 0; i < repeat; ++i) {
    output.push_back(tensor * scalar);
  }
  return output;
}

int64_t custom_op2(std::string s1, std::string s2) {
  return s1.compare(s2);
}

static auto registry =
    torch::RegisterOperators()
        // We parse the schema for the user.
        .op("custom::op", &custom_op)
        .op("custom::op2", &custom_op2)
        // User provided schema. Among other things, allows defaulting values,
        // because we cannot infer default values from the signature. It also
        // gives arguments meaningful names.
        .op("custom::op_with_defaults(Tensor tensor, float scalar = 1, int repeat = 1) -> Tensor[]",
            &custom_op);