How to perform certain operations on an Eigen tensor?
I need to perform certain operations on an Eigen tensor, but I could not find any example or documentation.
I have two tensors:
Eigen::Tensor<float,3> feature_buffer(K,45,7);
feature_buffer.setZero();
VectorXi number_buffer(K);
I need to perform the operation below on the tensor.
feature_buffer[:, :, -3:] = feature_buffer[:, :, :3] - \
feature_buffer[:, :, :3].sum(axis=1, keepdims=True)/number_buffer.reshape(K, 1, 1)
The above code is numpy code. I have done everything else, but I am stuck at this final step.
Can someone please help me with this? I have been stuck on it the whole day.
Thanks in advance.
I believe the numpy operation is ill-posed in two places, where the dimensions don't match up element-for-element. I'm not super familiar with numpy ndarray operations, so it could be a simple misunderstanding on my part, but if that operation succeeds, my guess is that numpy makes educated guesses (i.e. broadcasting) when some of the dimensions match up...
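For reference, if I trace the shapes using the dimensions (K, 45, 7) from the question, the two mismatched spots are exactly where numpy would broadcast size-1 axes:

feature_buffer[:, :, :3]                            -> shape (K, 45, 3)
feature_buffer[:, :, :3].sum(axis=1, keepdims=True) -> shape (K, 1, 3)
(K, 1, 3) / number_buffer.reshape(K, 1, 1)          -> shape (K, 1, 3)   (last axis broadcast)
(K, 45, 3) - (K, 1, 3)                              -> shape (K, 45, 3)  (axis 1 broadcast)
(K, 45, 3) assigned into feature_buffer[:, :, -3:]  -> shapes match

The Eigen code below makes those broadcasts explicit with reshape() and broadcast().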
That said, I get the gist of what you are trying to accomplish, so I wrote down the equivalent C++ code below, step by step. I took some liberties reinterpreting the operation to make the dimensions match up properly. In the end, even if it's not exactly the same operation, I hope just reading through the syntax clears things up.
#include <unsupported/Eigen/CXX11/Tensor>
#include <array>    // std::array
#include <iostream> // std::cout (used in the sanity check at the end)
int main(){
    long d0 = 10; // This is "K"
    long d1 = 10;
    long d2 = 10;
    Eigen::Tensor<float,3> feature_buffer(d0,d1,d2);
    Eigen::Tensor<float,1> number_buffer(d0);
    feature_buffer.setRandom();
    number_buffer.setRandom();

    // Step 1) Define numpy "feature_buffer[:,:,-3:]" in C++
    std::array<long,3> offsetA = {0, 0, d2-3};
    std::array<long,3> extentA = {d0,d1,3};
    auto feature_sliceA = feature_buffer.slice(offsetA,extentA);
    // Note: feature_sliceA is a "slice" object: it does not own the data in feature_buffer,
    // it merely points to a rectangular subregion inside of feature_buffer.
    // If you'd rather make a copy of that data, replace "auto" with "Eigen::Tensor<float,3>".

    // Step 2) Define numpy "feature_buffer[:, :, :3]" in C++
    std::array<long,3> offsetB = {0, 0, 0};
    std::array<long,3> extentB = {d0,d1,3};
    auto feature_sliceB = feature_buffer.slice(offsetB,extentB); // Note: offsetB/extentB here, not offsetA/extentA.

    // Step 3) Perform the numpy operation "feature_buffer[:, :, :3].sum(axis=1, keepdims=True)"
    std::array<long,1> sumDims = {1};
    std::array<long,3> newDims = {d0,1,3}; // This takes care of "keepdims=True": d1 is summed over, then kept as size 1.
    Eigen::Tensor<float,3> feature_sum = feature_sliceB.sum(sumDims).reshape(newDims);
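    // feature_sum now has dimensions (d0, 1, 3).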

    // Step 4) The numpy division "feature_buffer[:, :, :3].sum(axis=1, keepdims=True)/number_buffer.reshape(K, 1, 1)"
    // looks ill-formed to me: the left-hand side has shape (K, 1, 3) while number_buffer.reshape(K, 1, 1)
    // has shape (K, 1, 1), so the element counts don't match.
    // To go ahead, we can interpret this as dividing each of the 3 "columns" (in dimension 2) by number_buffer:
    // reshape number_buffer to (d0, 1, 1), then broadcast it to (d0, 1, 3).
    std::array<long,3> numBcast = {1,1,3};
    std::array<long,3> numDims = {d0,1,1};
    Eigen::Tensor<float,3> number_bcast = number_buffer.reshape(numDims).broadcast(numBcast);
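    // number_bcast now has dimensions (d0, 1, 3), matching feature_sum.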

    // Step 5) Perform the division operation
    Eigen::Tensor<float,3> feature_div = feature_sum/number_bcast;

    // Step 6) Perform the numpy subtraction
    // "feature_buffer[:, :, :3] - feature_buffer[:, :, :3].sum(axis=1, keepdims=True)/number_buffer.reshape(K, 1, 1)"
    // In our current program this corresponds to
    // "feature_sliceB - feature_div"
    // Actually, this is also ill-formed, since:
    //   feature_sliceB has dimensions (d0, d1, 3) = (10, 10, 3)
    //   feature_div    has dimensions (d0,  1, 3) = (10,  1, 3)
    //
    // To go ahead we can reinterpret once again: assume the subtraction happens once for each index in dimension 1.
    // We use broadcast again to copy the contents of feature_div d1 times along dimension 1.
    std::array<long,3> divBcast = {1,d1,1}; // d1 rather than a hardcoded 10, so it survives dimension changes.
    Eigen::Tensor<float,3> feature_div_bcast = feature_div.broadcast(divBcast);
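    // feature_div_bcast now has dimensions (d0, d1, 3), matching feature_sliceB.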

    // Step 7) Perform the main assignment operation
    feature_sliceA = feature_sliceB - feature_div_bcast;
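    // Because feature_sliceA is a view, the assignment above wrote the result straight
    // into the last 3 entries of dimension 2 of feature_buffer.

    // Optional sanity check (a quick sketch, not part of the operation itself): recompute
    // one entry with plain loops and compare. The first 3 entries of dimension 2 are not
    // touched by the assignment (they don't overlap the last 3 as long as d2 >= 6), so the
    // inputs can still be read from feature_buffer.
    float col_sum = 0.f;
    for (long j = 0; j < d1; ++j) col_sum += feature_buffer(0, j, 0);
    float expected = feature_buffer(0, 0, 0) - col_sum / number_buffer(0);
    std::cout << "manual: " << expected << "  tensor: " << feature_buffer(0, 0, d2 - 3) << "\n";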
}
You can see the same code working on godbolt.
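If you'd rather build it locally, Eigen is header-only, so something like the following should work (the Eigen path and output name are placeholders for your setup):

g++ -std=c++14 -I /path/to/eigen main.cpp -o tensor_ops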
I did not consider performance here at all. I'm sure you can find better ways of writing this neatly.