
Retrieving information from a Mask_RCNN Tensor

I've successfully trained a Mask_RCNN, and for illustration purposes, let's focus on this sample image the network generates:

[sample detection image]

It's all very good, no problem. What I'd like to achieve, however, is to have the following variables with their values per instance:

   mask: (an image which shows the detected object only, like a binary map)
   box: (as a list)
   mask_border_positions (x, y): (as a list)
   mask_center_position (x, y): (as a tuple)

I also have the function that visualizes the image above, taken from the official site:

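# From mrcnn/visualize.py. As written it relies on numpy (np), random,
# matplotlib.pyplot (plt), matplotlib.patches (patches), matplotlib.patches.Polygon,
# skimage.measure.find_contours, and the helpers apply_mask() / random_colors()
# defined in the same module.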
def display_instances(image, boxes, masks, class_ids, class_names,
                      scores=None, title="",
                      figsize=(16, 16), ax=None,
                      show_mask=True, show_bbox=True,
                      colors=None, captions=None):
    """
    boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.
    masks: [height, width, num_instances]
    class_ids: [num_instances]
    class_names: list of class names of the dataset
    scores: (optional) confidence scores for each box
    title: (optional) Figure title
    show_mask, show_bbox: To show masks and bounding boxes or not
    figsize: (optional) the size of the image
    colors: (optional) An array of colors to use with each object
    captions: (optional) A list of strings to use as captions for each object
    """
    # Number of instances
    N = boxes.shape[0]
    if not N:
        print("\n*** No instances to display *** \n")
    else:
        assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]

    # If no axis is passed, create one and automatically call show()
    auto_show = False
    if not ax:
        _, ax = plt.subplots(1, figsize=figsize)
        auto_show = True

    # Generate random colors
    colors = colors or random_colors(N)

    # Show area outside image boundaries.
    height, width = image.shape[:2]
    ax.set_ylim(height + 10, -10)
    ax.set_xlim(-10, width + 10)
    ax.axis('off')
    ax.set_title(title)

    masked_image = image.astype(np.uint32).copy()
    for i in range(N):
        color = colors[i]

        # Bounding box
        if not np.any(boxes[i]):
            # Skip this instance. Has no bbox. Likely lost in image cropping.
            continue
        y1, x1, y2, x2 = boxes[i]
        if show_bbox:
            p = patches.Rectangle((x1, y1), x2 - x1, y2 - y1, linewidth=2,
                                alpha=0.7, linestyle="dashed",
                                edgecolor=color, facecolor='none')
            ax.add_patch(p)

        # Label
        if not captions:
            class_id = class_ids[i]
            score = scores[i] if scores is not None else None
            label = class_names[class_id]
            x = random.randint(x1, (x1 + x2) // 2)
            caption = "{} {:.3f}".format(label, score) if score else label
        else:
            caption = captions[i]
        ax.text(x1, y1 + 8, caption,
                color='w', size=11, backgroundcolor="none")

        # Mask
        mask = masks[:, :, i]
        if show_mask:
            masked_image = apply_mask(masked_image, mask, color)

        # Mask Polygon
        # Pad to ensure proper polygons for masks that touch image edges.
        padded_mask = np.zeros(
            (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)
        padded_mask[1:-1, 1:-1] = mask
        contours = find_contours(padded_mask, 0.5)
        for verts in contours:
            # Subtract the padding and flip (y, x) to (x, y)
            verts = np.fliplr(verts) - 1
            p = Polygon(verts, facecolor="none", edgecolor=color)
            ax.add_patch(p)
    ax.imshow(masked_image.astype(np.uint8))
    if auto_show:
        plt.show()
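
As an aside, the "Mask Polygon" block above already computes the border coordinates of every instance. The following is a minimal sketch (not part of the original code) that reuses the same find_contours trick outside of plotting, assuming p is one result dict returned by model.detect as in the snippets below:

import numpy as np
from skimage.measure import find_contours

masks = p['masks']                       # [height, width, num_instances]
for i in range(masks.shape[-1]):
    mask = masks[:, :, i]                # binary map of instance i

    # Pad so masks touching the image edge still produce closed contours
    padded_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8)
    padded_mask[1:-1, 1:-1] = mask
    contours = find_contours(padded_mask, 0.5)
    # Subtract the padding and flip (y, x) to (x, y), as in display_instances
    mask_border_positions = [np.fliplr(verts) - 1 for verts in contours]

    # Center taken as the centroid of the mask pixels (not of the bounding box)
    ys, xs = np.nonzero(mask)
    mask_center_position = (xs.mean(), ys.mean())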

These code snippets are then called in main as follows:

file_names = glob(os.path.join(IMAGE_DIR, "*.jpg"))
masks_prediction = np.zeros((510, 510, len(file_names)))
for i in range(len(file_names)):
    print(i)
    image = skimage.io.imread(file_names[i])
    predictions = model.detect([image],  verbose=1)
    p = predictions[0]
    masks = p['masks']
    merged_mask = np.zeros((masks.shape[0], masks.shape[1]))
    for j in range(masks.shape[2]):
        merged_mask[masks[:,:,j]==True] = True
    masks_prediction[:,:,i] = merged_mask  # store the merged mask once per image
print(masks_prediction.shape)

and:

file_names = glob(os.path.join(IMAGE_DIR, "*.jpg"))
class_names = ['BG', 'car', 'traffic_light', 'person']
test_image = skimage.io.imread(file_names[random.randint(0,len(file_names)-1)])
predictions = model.detect([test_image], verbose=1) # We are replicating the same image to fill up the batch_size
p = predictions[0]
visualize.display_instances(test_image, p['rois'], p['masks'], p['class_ids'], 
                            class_names, p['scores'])

I know it's probably a trivial question and these values already exist in the code somewhere, but since I am a beginner, I could not get the mask outlines or their centers. If there is a way to obtain this information per instance, it would be great.

Thanks in advance.

The following does it right:

masks = p['masks']               # [height, width, num_instances]
class_ids = p['class_ids']
rois = p['rois']                 # bounding boxes: [num_instances, (y1, x1, y2, x2)]
scores = p['scores']
bounding_box = rois[enumerator]  # 'enumerator' is the index of the instance you want
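
To get the first two items from the list in the question (the mask as a binary image and the box as a plain list), a small sketch along the same lines, assuming i is the instance index:

import numpy as np
import skimage.io

i = 0                                    # index of the instance you want
mask = masks[:, :, i]                    # boolean [height, width] map for that instance

# mask as an image that shows the detected object only (binary map)
mask_image = mask.astype(np.uint8) * 255
skimage.io.imsave("instance_{}_mask.png".format(i), mask_image)

# box as a list: [y1, x1, y2, x2] in image coordinates
box = rois[i].tolist()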

As for the outline coordinates:

import cv2
import numpy as np


def getBoundaryPositions(im):
    """Return the mask image, its center (x, y) and its outline coordinates."""
    im = im.astype(np.uint8)

    # Find contours. OpenCV 3.x returns (image, contours, hierarchy);
    # in OpenCV 4.x it returns only (contours, hierarchy).
    (im, contours, hierarchy) = cv2.findContours(im, cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_NONE)
    cnts = contours[0]
    # Each contour point has shape (1, 2); collect them as (x, y) pairs.
    outline_posesXY = np.array([pt[0] for pt in cnts])

    # Calculate image moments of the detected contour
    M = cv2.moments(contours[0])

    # Collect the (x, y) position of the mask's center from the moments
    positionXY = []
    positionXY.append(round(M['m10'] / M['m00']))
    positionXY.append(round(M['m01'] / M['m00']))

    return (im, positionXY, outline_posesXY)
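
A usage sketch tying the pieces together, looping over every detected instance (again assuming p comes from model.detect; the helper above returns (im, positionXY, outline_posesXY)):

per_instance = []
for i in range(p['masks'].shape[-1]):
    instance_mask = p['masks'][:, :, i].astype(np.uint8)
    _, center_xy, border_xy = getBoundaryPositions(instance_mask)
    per_instance.append({
        'class_id': p['class_ids'][i],
        'score': p['scores'][i],
        'box': p['rois'][i].tolist(),                 # [y1, x1, y2, x2]
        'mask_center_position': tuple(center_xy),
        'mask_border_positions': border_xy.tolist(),  # list of (x, y) points
    })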
