
How to use Triton server "ensemble model" with 1:N input/output to create patches from a large image?

I am trying to feed a very large image into Triton server. I need to divide the input image into patches and feed the patches one by one into a TensorFlow model. The image has a variable size, so the number of patches N is variable for each call.

I think a Triton ensemble model that calls the following steps would do the job:

  1. A Python model (pre-process) to create the patches
  2. The segmentation model
  3. Finally, another Python model (post-process) to merge the output patches into one large output mask

However, for this I would have to write a config.pbtxt file with a 1:N and N:1 relation, meaning the ensemble scheduler would need to call the 2nd step multiple times and the 3rd step once with the aggregated output.
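For illustration, here is a rough sketch of such an ensemble config.pbtxt, assuming the variable patch count is expressed as a leading -1 dimension; the model and tensor names (preprocess, segmentation, postprocess, RAW_IMAGE, FULL_MASK, etc.) are placeholders, not existing models:

name: "patch_ensemble"
platform: "ensemble"
max_batch_size: 0
input [
  {
    name: "RAW_IMAGE"
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]
  }
]
output [
  {
    name: "FULL_MASK"
    data_type: TYPE_FP32
    dims: [ -1, -1, 1 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "preprocess"
      model_version: -1
      input_map { key: "IMAGE" value: "RAW_IMAGE" }
      output_map { key: "PATCHES" value: "patches" }
    },
    {
      model_name: "segmentation"
      model_version: -1
      input_map { key: "INPUT" value: "patches" }
      output_map { key: "OUTPUT" value: "patch_masks" }
    },
    {
      model_name: "postprocess"
      model_version: -1
      input_map { key: "PATCH_MASKS" value: "patch_masks" }
      output_map { key: "MASK" value: "FULL_MASK" }
    }
  ]
}

Whether the scheduler can actually expand such a 1:N relation is exactly what I am unsure about.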

Is this possible, or do I need to use some other technique?

Disclaimer

The answer below may not give exactly what you want (based on my understanding of your question). Rather, it presents some general functionality for this kind of pipeline: slice an image into smaller patches, pass the patches to the model, and stitch them back into the final result. In summary:

  • slice the image into smaller patches
  • send them to the model and store the patch outputs
  • rejoin the patches

Input

import cv2 
import matplotlib.pyplot as plt

input_img = cv2.imread('/content/2.jpeg')
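# note: cv2.imread returns the image in BGR channel order, while matplotlib assumes RGB,
# so plt.imshow may show swapped colors unless you convert first, e.g.
# input_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB)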
print(input_img.shape) # (719, 640, 3)
plt.imshow(input_img) 

Slice and Stitch

The following functionality is adapted from here; more details and discussion can be found there. Apart from the original code, we bring the necessary functionality together in a single class (ImageSliceRejoin).

# ref: https://github.com/idealo/image-super-resolution
import numpy as np  # required by the padding / splitting helpers below

class ImageSliceRejoin:
    def pad_patch(self, image_patch, padding_size, channel_last=True):
        """ Pads image_patch with padding_size edge values. """
        if channel_last:
            return np.pad(
                image_patch,
                ((padding_size, padding_size), 
                (padding_size, padding_size), (0, 0)),
                'edge',
            )
        else:
            return np.pad(
                image_patch,
                ((0, 0), (padding_size, padding_size), (padding_size, padding_size)),
                'edge',
            )

    # function to split the image into patches        
    def split_image_into_overlapping_patches(self, image_array, patch_size, padding_size=2):
        """ Splits the image into partially overlapping patches.
        The patches overlap by padding_size pixels.
        Pads the image twice:
            - first to have a size multiple of the patch size,
            - then to have equal padding at the borders.
        Args:
            image_array: numpy array of the input image.
            patch_size: size of the patches from the original image (without padding).
            padding_size: size of the overlapping area.
        """
        xmax, ymax, _ = image_array.shape
        x_remainder = xmax % patch_size
        y_remainder = ymax % patch_size
        
        # modulo here is to avoid extending of patch_size instead of 0
        x_extend = (patch_size - x_remainder) % patch_size
        y_extend = (patch_size - y_remainder) % patch_size
        
        # make sure the image is divisible into regular patches
        extended_image = np.pad(image_array, ((0, x_extend), (0, y_extend), (0, 0)), 'edge')
        
        # add padding around the image to simplify computations
        padded_image = self.pad_patch(extended_image, padding_size, channel_last=True)
        
        xmax, ymax, _ = padded_image.shape
        patches = []
        
        x_lefts = range(padding_size, xmax - padding_size, patch_size)
        y_tops = range(padding_size, ymax - padding_size, patch_size)
        
        for x in x_lefts:
            for y in y_tops:
                x_left = x - padding_size
                y_top = y - padding_size
                x_right = x + patch_size + padding_size
                y_bottom = y + patch_size + padding_size
                patch = padded_image[x_left:x_right, y_top:y_bottom, :]
                patches.append(patch)
        
        return np.array(patches), padded_image.shape

    # join the patches
    def stich_together(self, patches, padded_image_shape, target_shape, padding_size=4):
        """ Reconstruct the image from overlapping patches.
        After scaling, shapes and padding should be scaled too.
        Args:
            patches: patches obtained with split_image_into_overlapping_patches
            padded_image_shape: shape of the padded image constructed in split_image_into_overlapping_patches
            target_shape: shape of the final image
            padding_size: size of the overlapping area.
        """
        xmax, ymax, _ = padded_image_shape

        # unpad patches
        patches = patches[:, padding_size:-padding_size, padding_size:-padding_size, :]

        patch_size = patches.shape[1]
        n_patches_per_row = ymax // patch_size
        complete_image = np.zeros((xmax, ymax, 3))

        row = -1
        col = 0
        for i in range(len(patches)):
            if i % n_patches_per_row == 0:
                row += 1
                col = 0
            complete_image[
            row * patch_size: (row + 1) * patch_size, col * patch_size: (col + 1) * patch_size, :
            ] = patches[i]
            col += 1
        return complete_image[0: target_shape[0], 0: target_shape[1], :]

Initiate Slicing

import numpy as np 

isr = ImageSliceRejoin()
padding_size = 1

patches, p_shape = isr.split_image_into_overlapping_patches(
    input_img, 
    patch_size=220, 
    padding_size=padding_size
)

patches.shape, p_shape, input_img.shape
((12, 222, 222, 3), (882, 662, 3), (719, 640, 3))
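As a quick sanity check, these shapes follow from the slicing arithmetic: each side is padded up to a multiple of patch_size and then by padding_size on every border, so (719, 640) becomes (882, 662), which yields 4 × 3 = 12 patches of 220 + 2·1 = 222 pixels per side. A small recomputation using the same numbers as above:

import math

patch_size, padding_size = 220, 1
h, w = 719, 640  # input image height and width

# pad each side up to a multiple of patch_size, then add padding_size per border
padded_h = h + (patch_size - h % patch_size) % patch_size + 2 * padding_size  # 882
padded_w = w + (patch_size - w % patch_size) % patch_size + 2 * padding_size  # 662

n_patches = math.ceil(h / patch_size) * math.ceil(w / patch_size)  # 4 * 3 = 12
patch_edge = patch_size + 2 * padding_size                         # 222

print(n_patches, patch_edge, (padded_h, padded_w))  # 12 222 (882, 662)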

Verify

n = int(np.ceil(patches.shape[0] / 2))
plt.figure(figsize=(20, 20))
patch_size = patches.shape[1]

for i in range(patches.shape[0]):
    patch = patches[i] 
    ax = plt.subplot(n, n, i + 1)
    patch_img = np.reshape(patch, (patch_size, patch_size, 3))
    plt.imshow(patch_img.astype("uint8"))
    plt.axis("off")

(figure: the sliced patches displayed in a grid)

Inference

I'm using the Image-Super-Resolution model for demonstration.

# import model
from ISR.models import RDN
model = RDN(weights='psnr-small')

# number of patches that will be passed to the model per inference call:
# here, batch_size < len(patches)
batch_size = 2

for i in range(0, len(patches), batch_size):
    # get some patches
    batch = patches[i: i + batch_size]

    # pass them to model to give patches output 
    batch = model.model.predict(batch)

    # save the output patches 
    if i == 0:
        collect = batch
    else:
        collect = np.append(collect, batch, axis=0)
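A small side note on the collection pattern: np.append copies the growing array on every iteration, so appending the batch outputs to a list and concatenating once at the end is a slightly cheaper equivalent (a sketch assuming the same model interface as above):

outputs = []
for i in range(0, len(patches), batch_size):
    # run one batch of patches through the model and keep the result
    outputs.append(model.model.predict(patches[i: i + batch_size]))

collect = np.concatenate(outputs, axis=0)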

Now, collect holds the model output for each patch. The model upscales by a factor of 2 (each 222×222 patch becomes 444×444), which is why scale = 2 is used when rejoining below.

patches.shape, collect.shape
((12, 222, 222, 3), (12, 444, 444, 3))

Rejoin Patches

scale = 2
padded_size_scaled = tuple(np.multiply(p_shape[0:2], scale)) + (3,)
scaled_image_shape = tuple(np.multiply(input_img.shape[0:2], scale)) + (3,)

sr_img = isr.stich_together(
    collect,
    padded_image_shape=padded_size_scaled,
    target_shape=scaled_image_shape,
    padding_size=padding_size * scale,
)

Verify

print(input_img.shape, sr_img.shape)
# (719, 640, 3) (1438, 1280, 3)

fig, ax = plt.subplots(1,2)
fig.set_size_inches(18.5, 10.5)
ax[0].imshow(input_img)
ax[1].imshow(sr_img.astype('uint8'))

(figure: the input image alongside the super-resolved output)
