Resize bounding box according to image
I am implementing object localization in Python. One problem I have run into is that when I resize the observable region after taking an action, I don't know how to change the ground truth box along with it. As a result, this happens:
The ground truth box is not resized to fit the airplane exactly, so I can't localize correctly. My current function for formatting the next state is as follows:
def next_state(init_input, b, b_prime, g, a):
    """
    Returns the observable region of the next state.

    Formats the next state's observable region, defined
    by b_prime, to be of dimension (224, 224, 3). Adding 16
    additional pixels of context around the original bounding box.
    The ground truth box must be reformatted according to the
    new observable region.

    :param init_input:
        The initial input volume of the current episode.
    :param b:
        The current state's bounding box.
    :param b_prime:
        The subsequent state's bounding box.
    :param g:
        The ground truth box of the target object.
    :param a:
        The action taken by the agent at the current step.
    """
    # Determine the pixel coordinates of the observable region for the following state
    context_pixels = 16
    x1 = max(b_prime[0] - context_pixels, 0)
    y1 = max(b_prime[1] - context_pixels, 0)
    x2 = min(b_prime[2] + context_pixels, IMG_SIZE)
    y2 = min(b_prime[3] + context_pixels, IMG_SIZE)

    # Determine observable region
    observable_region = cv2.resize(init_input[y1:y2, x1:x2], (224, 224))

    # Difference between crop region and image dimensions
    x1_diff = x1
    y1_diff = y1
    x2_diff = IMG_SIZE - x2
    y2_diff = IMG_SIZE - y2

    # Resize ground truth box
    g[0] = int(g[0] - 0.5 * x1_diff)  # x1
    g[1] = int(g[1] - 0.5 * y1_diff)  # y1
    g[2] = int(g[2] + 0.5 * x2_diff)  # x2
    g[3] = int(g[3] + 0.5 * y2_diff)  # y2

    return observable_region, g
I can't seem to change the dimensions correctly. I followed this post to resize the bounding box initially; however, that solution doesn't seem to work in this case.
The bounding box / ground truth box format is: b = [x1, y1, x2, y2]
init_input has dimensions (224, 224, 3), IMG_SIZE = 224, and context_pixels = 16.
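The context padding and boundary clamping used at the top of next_state can be sketched on their own (the example boxes below are made-up coordinates, not from the original post):

```python
IMG_SIZE = 224
context_pixels = 16

def padded_crop(b_prime):
    """Expand a box [x1, y1, x2, y2] by 16 px of context, clamped to the image bounds."""
    x1 = max(b_prime[0] - context_pixels, 0)
    y1 = max(b_prime[1] - context_pixels, 0)
    x2 = min(b_prime[2] + context_pixels, IMG_SIZE)
    y2 = min(b_prime[3] + context_pixels, IMG_SIZE)
    return x1, y1, x2, y2

# A box near the top-left corner: the padding is clamped to 0 on those sides.
print(padded_crop([10, 5, 100, 100]))    # (0, 0, 116, 116)
# A box near the bottom-right: the padding is clamped to IMG_SIZE.
print(padded_crop([120, 120, 215, 220])) # (104, 104, 224, 224)
```

Note that because of this clamping, the crop is not always a symmetric enlargement of b_prime, which is part of why a fixed shift of the ground truth box cannot work in general.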
Here is an additional example:
It seems the ground truth box is the correct size, but its position is off.
I have updated the code above. A scale factor appears to be the wrong way to solve this; by adding/subtracting the number of pixels to enlarge by, I have gotten much closer. I believe the remaining issue is related to interpolation, so if anyone could help with that, it would be greatly appreciated.
New example:
A solution has been provided.
My problem was solved in that post by a user named @lenik.
Before applying the scale factor to the pixel coordinates of the ground truth box g, the zero offset must first be subtracted, so that x1, y1 becomes 0, 0. This allows the scaling to work correctly.
The coordinates of any point (x, y) after the transformation can therefore be computed as:
x_new = (x - x1) * IMG_SIZE / (x2 - x1)
y_new = (y - y1) * IMG_SIZE / (y2 - y1)
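As a quick sanity check of the formula (using made-up crop coordinates, not values from the original post), the crop's own corners should map to the corners of the resized output:

```python
IMG_SIZE = 224

def to_crop_coords(x, y, x1, y1, x2, y2, size=IMG_SIZE):
    """Map a point from original-image coordinates into the resized crop."""
    x_new = (x - x1) * size / (x2 - x1)
    y_new = (y - y1) * size / (y2 - y1)
    return x_new, y_new

# A hypothetical 112x112 crop region spanning (50, 40) to (162, 152):
# its top-left corner maps to (0, 0) ...
print(to_crop_coords(50, 40, 50, 40, 162, 152))   # (0.0, 0.0)
# ... and its bottom-right corner maps to (224, 224).
print(to_crop_coords(162, 152, 50, 40, 162, 152)) # (224.0, 224.0)
```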
In code, as it applies to my problem, the solution is as follows:
def next_state(init_input, b_prime, g):
    """
    Returns the observable region of the next state.

    Formats the next state's observable region, defined
    by b_prime, to be of dimension (224, 224, 3). Adding 16
    additional pixels of context around the original bounding box.
    The ground truth box must be reformatted according to the
    new observable region.

    :param init_input:
        The initial input volume of the current episode.
    :param b_prime:
        The subsequent state's bounding box.
    :param g:
        The ground truth box of the target object.
    """
    # Determine the pixel coordinates of the observable region for the following state
    context_pixels = 16
    x1 = max(b_prime[0] - context_pixels, 0)
    y1 = max(b_prime[1] - context_pixels, 0)
    x2 = min(b_prime[2] + context_pixels, IMG_SIZE)
    y2 = min(b_prime[3] + context_pixels, IMG_SIZE)

    # Determine observable region
    observable_region = cv2.resize(init_input[y1:y2, x1:x2], (224, 224),
                                   interpolation=cv2.INTER_AREA)

    # Resize ground truth box: subtract the crop offset, then scale
    g[0] = int((g[0] - x1) * IMG_SIZE / (x2 - x1))  # x1
    g[1] = int((g[1] - y1) * IMG_SIZE / (y2 - y1))  # y1
    g[2] = int((g[2] - x1) * IMG_SIZE / (x2 - x1))  # x2
    g[3] = int((g[3] - y1) * IMG_SIZE / (y2 - y1))  # y2

    return observable_region, g
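To verify the box transform without needing an image or cv2, the ground-truth resizing step can be isolated and checked on made-up coordinates (these numbers are illustrative, not from the original post):

```python
IMG_SIZE = 224

def resize_box(g, x1, y1, x2, y2, size=IMG_SIZE):
    """Apply the subtract-offset-then-scale transform to a box [x1, y1, x2, y2]."""
    return [
        int((g[0] - x1) * size / (x2 - x1)),
        int((g[1] - y1) * size / (y2 - y1)),
        int((g[2] - x1) * size / (x2 - x1)),
        int((g[3] - y1) * size / (y2 - y1)),
    ]

# A 112x112 crop region spanning (40, 40) to (152, 152), with the ground
# truth box centered inside it. After resizing, the box should stay centered
# in the 224x224 output, scaled by the same factor (2x) as the crop.
g = [68, 68, 124, 124]
print(resize_box(g, 40, 40, 152, 152))  # [56, 56, 168, 168]
```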