pytorch/caffe2/operators/elementwise_ops_utils.cc
Jerry Zhang aebf3b47ae Remove template parameter from Tensor (#9939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9939

Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This change removes the templating at call sites; the core implementations will change later.

Before this change, the Caffe2 Tensor class was bound at compile time to a particular device/context. With this change, the device becomes a runtime property (stored inside the tensor), while the same semantics are preserved. For example, one still has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra *DeviceType* argument to most of the Tensor constructors, e.g. `Tensor(DeviceType type)`.
2. The semantics of the constructor `Tensor(const Tensor<SrcContext>& src, ContextForCopy* context)` have changed. The second context is passed in so that we can call the templated Copy function. Previously it could be on a different device than the source and target; now we enforce that the context, if provided, has the same device type as `src`.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter, `Blob::GetMutableTensor`, that verifies both that the Blob contains a Tensor and that it is of the correct type.
4. Notably, the Tensor type is no longer default-constructible (as we no longer have tensors with an unknown device), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: ezyang, houseroad

Differential Revision: D9024330

fbshipit-source-id: e0b8295d2dc6ebe2963383ded5af799ad17164ba
2018-07-27 10:56:39 -07:00

112 lines
2.9 KiB
C++

#include "caffe2/operators/elementwise_ops_utils.h"
namespace caffe2 {
namespace elementwise_ops_utils {
std::tuple<size_t, size_t, size_t>
ComputeLegacyBroadcastSizes(const Tensor& A, const Tensor& B, int axis) {
  CAFFE_ENFORCE_GE(
      A.ndim(),
      B.ndim(),
      "If you are doing broadcasting, input1 should have "
      "a smaller or equal number of dimensions.");
  if (axis == -1) {
    axis = A.ndim() - B.ndim();
  }
  CAFFE_ENFORCE(
      axis >= 0 && axis <= A.ndim() - B.ndim(),
      "Broadcast axis should be in the range of "
      "[0, A.ndim() - B.ndim()], but axis = ",
      axis);
  int b_dim_start = 0;
  while (b_dim_start < B.ndim() && B.dim(b_dim_start) == 1) {
    ++b_dim_start;
  }
  int b_dim_end = B.ndim() - 1;
  while (b_dim_end >= b_dim_start && B.dim(b_dim_end) == 1) {
    --b_dim_end;
  }
  size_t pre = 1, n = 1, post = 1;
  for (int i = 0; i < axis + b_dim_start; ++i) {
    pre *= A.dim(i);
  }
  for (int i = b_dim_start; i <= b_dim_end; ++i) {
    CAFFE_ENFORCE_EQ(
        A.dim(i + axis), B.dim(i), "Broadcast dimension mismatch.");
    n *= B.dim(i);
  }
  for (int i = axis + b_dim_end + 1; i < A.ndim(); ++i) {
    post *= A.dim(i);
  }
  return std::make_tuple(pre, n, post);
}
std::vector<int> ComputeBinaryBroadcastForwardDims(
    const std::vector<int>& A_dims,
    const std::vector<int>& B_dims) {
  const int ndim = std::max(A_dims.size(), B_dims.size());
  std::vector<int> C_dims(ndim);
  int i = A_dims.size() - 1;
  int j = B_dims.size() - 1;
  int k = ndim - 1;
  for (; i >= 0 && j >= 0; --k) {
    const int A_dim = A_dims[i];
    const int B_dim = B_dims[j];
    CAFFE_ENFORCE(A_dim == B_dim || A_dim == 1 || B_dim == 1);
    if (A_dim == 0 || B_dim == 0) {
      C_dims[k] = 0;
    } else {
      C_dims[k] = std::max(A_dim, B_dim);
    }
    --i;
    --j;
  }
  for (; i >= 0; --i) {
    C_dims[k--] = A_dims[i];
  }
  for (; j >= 0; --j) {
    C_dims[k--] = B_dims[j];
  }
  return C_dims;
}
void ComputeBinaryBroadcastBackwardAxes(
    const std::vector<int>& A_dims,
    const std::vector<int>& B_dims,
    std::vector<int>* A_axes,
    std::vector<int>* B_axes) {
  A_axes->clear();
  B_axes->clear();
  const int ndim = std::max(A_dims.size(), B_dims.size());
  int i = A_dims.size() - 1;
  int j = B_dims.size() - 1;
  int k = ndim - 1;
  for (; i >= 0 && j >= 0; --k) {
    CAFFE_ENFORCE(A_dims[i] == B_dims[j] || A_dims[i] == 1 || B_dims[j] == 1);
    if (A_dims[i] != B_dims[j]) {
      if (A_dims[i] == 1) {
        A_axes->push_back(k);
      }
      if (B_dims[j] == 1) {
        B_axes->push_back(k);
      }
    }
    --i;
    --j;
  }
  if (i < 0) {
    for (; k >= 0; --k) {
      A_axes->push_back(k);
    }
  } else {
    for (; k >= 0; --k) {
      B_axes->push_back(k);
    }
  }
  std::reverse(A_axes->begin(), A_axes->end());
  std::reverse(B_axes->begin(), B_axes->end());
}
} // namespace elementwise_ops_utils
} // namespace caffe2