
How to get data from a Tensor object in C++

I am running a TensorFlow model that returns a 3D array as output, but I can't get that array of data out of the tensor.

I can print the shape of the model's output without any problem:

std::vector<tf::Tensor> outputs;
auto start_inference = std::chrono::high_resolution_clock::now();
_status = _session->Run({inputs}, {"k2tfout_0", "k2tfout_1"}, {}, &outputs);
if (!_status.ok())
{
  std::cerr << _status.ToString() << std::endl;
  return 0;
}
unsigned int output_img_n0 = outputs[0].shape().dim_size(0);
unsigned int output_img_h0 = outputs[0].shape().dim_size(1);
unsigned int output_img_w0 = outputs[0].shape().dim_size(2);
unsigned int output_img_c0 = outputs[0].shape().dim_size(3);

That code works without any error and shows the shape of the array. But I still can't get the data out of the outputs Tensor object.

The only call that works is

float_t *plant_pointer = outputs[1].flat<float_t>().data();

But it destroys the array shape.
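
For example, to read a single element through that raw pointer I would have to compute the flat offset by hand, something like this (just a sketch of what I mean, with hi, wi and ci being some pixel indices):

// Reading element (0, hi, wi, ci) of outputs[0] through a raw pointer means
// doing the index arithmetic manually, which is what I would like to avoid:
float_t *raw_ptr = outputs[0].flat<float_t>().data();
unsigned int offset = (hi * output_img_w0 + wi) * output_img_c0 + ci;  // batch index fixed to 0
float_t value = raw_ptr[offset];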

EDIT:
The output shape of the tensor is [num, height, width, channel] = [1, 480, 600, 3], so the output is the semantic segmentation image produced by the model. I just want the image part, without the first dimension, which is always 1.

The tensorflow::Tensor class allows you to access its contents through several methods. With .flat you get a flattened version of the array, .tensor gives you a full Eigen tensor, and then there are a few others like .vec / .matrix (like .tensor with the number of dimensions fixed to 1 or 2) and flat_inner_dims / flat_outer_dims / flat_inner_outer_dims (which give you a tensor with some dimensions collapsed). You can use whichever suits you best. In this case, for example, if you want to print all the values in the tensor, you can use .flat and compute the corresponding offset, or use .tensor if you know that the number of dimensions is 4:

std::vector<tf::Tensor>        outputs;
auto start_inference = std::chrono::high_resolution_clock::now();
_status = _session->Run({inputs}, {"k2tfout_0", "k2tfout_1"}, {}, &outputs);
if (!_status.ok())
{
  std::cerr << _status.ToString() << std::endl;
  return 0;
}
unsigned int output_img_n0 = outputs[0].shape().dim_size(0);
unsigned int output_img_h0 = outputs[0].shape().dim_size(1);
unsigned int output_img_w0 = outputs[0].shape().dim_size(2);
unsigned int output_img_c0 = outputs[0].shape().dim_size(3);

for (unsigned int ni = 0; ni < output_img_n0; ni++)
{
  for (unsigned int hi = 0; hi < output_img_h0; hi++)
  {
    for (unsigned int wi = 0; wi < output_img_w0; wi++)
    {
      for (unsigned int ci = 0; ci < output_img_c0; ci++)
      {
        float_t value;
        // Get value through .flat()
        unsigned int offset = ni * output_img_h0 * output_img_w0 * output_img_c0 +
                              hi * output_img_w0 * output_img_c0 +
                              wi * output_img_c0 +
                              ci;
        value = outputs[0].flat<float_t>()(offset);
        // Get value through .tensor()
        value = outputs[0].tensor<float_t, 4>()(ni, hi, wi, ci);
        std::cout << "output[0](" << ni << ", " << hi << ", " << wi << ", " << ci << ") = ";
        std::cout << value << std::endl;
      }
    }
  }
}
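
Since in your case the first dimension is always 1, flat_inner_dims (mentioned above) is also a convenient way to drop it: collapsing all but the last two dimensions turns the [1, 480, 600, 3] tensor into a [480, 600, 3] view of the same buffer. A minimal sketch, assuming that shape:

// Collapse all dims but the last 2 into the first one:
// [1, 480, 600, 3] -> [1*480, 600, 3] = [480, 600, 3], no copy involved.
tf::TTypes<float_t, 3>::Tensor image = outputs[0].flat_inner_dims<float_t, 3>();

for (unsigned int hi = 0; hi < output_img_h0; hi++)
  for (unsigned int wi = 0; wi < output_img_w0; wi++)
    for (unsigned int ci = 0; ci < output_img_c0; ci++)
      std::cout << "image(" << hi << ", " << wi << ", " << ci << ") = "
                << image(hi, wi, ci) << std::endl;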

Note that, although these methods create Eigen::TensorMap objects, which are not really expensive, you may prefer to call them only once and then query the tensor object multiple times. For example:

// Make tensor
tf::TTypes<float_t, 4>::Tensor outputTensor0 = outputs[0].tensor<float_t, 4>();
// Query tensor multiple times
for (...)
{
    std::cout << outputTensor0(ni, hi, wi, ci) << std::endl;
}

EDIT:

If you want to obtain a pointer to the data of the tensor (for example to build another object from the same buffer, avoiding copies or iteration), you can also do that. One option is to use the .tensor_data method, which returns a tensorflow::StringPiece, which is in turn an absl::string_view, which is just a polyfill for std::string_view. So the .data method of this object will give you a pointer to the underlying byte buffer of the tensor (note the warning in the documentation of .tensor_data: "the underlying tensor buffer is refcounted", so make sure the tensor is not destroyed while you use the buffer). You can therefore do:

tf::StringPiece output0Str = outputs[0].tensor_data();
const char* output0Ptr = output0Str.data();

This, however, gives you a pointer to char, so you would have to cast it to use it as float. That should be safe, but it looks ugly, so you can let Eigen do it for you: every Eigen object has a .data method that returns a pointer of its element type to the underlying buffer. For example:

const float_t* output0Ptr = outputs[0].flat<float_t>().data();
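
For instance, since the batch dimension is 1 here, that buffer holds exactly one 480 x 600 x 3 image, so a single copy is enough if the consuming code needs to own the data (a minimal sketch, assuming the shape from the question):

// The buffer contains output_img_h0 * output_img_w0 * output_img_c0 floats in
// row-major HWC order (the leading batch dimension of size 1 adds nothing).
std::vector<float_t> image(output0Ptr,
                           output0Ptr + output_img_h0 * output_img_w0 * output_img_c0);
// image[(hi * output_img_w0 + wi) * output_img_c0 + ci] is pixel (hi, wi), channel ci.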
