First, I assume familiarity with Python's numpy.tensordot . Here I use a simple instance of it, as follows (pseudocode):
A.shape = (1, x, y)
B.shape = (x, y, z, t)
C = numpy.tensordot(A, B)
C.shape = (1, z, t)
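For concreteness, here is a minimal runnable version of that contraction (the sizes are arbitrary, chosen just for illustration):

```python
import numpy as np

# Arbitrary small sizes for illustration.
x, y, z, t = 3, 4, 5, 6
A = np.random.rand(1, x, y)
B = np.random.rand(x, y, z, t)

# With the default axes=2, tensordot sums over the last two axes of A
# and the first two axes of B:
#   C[0, k, l] = sum_{i,j} A[0, i, j] * B[i, j, k, l]
C = np.tensordot(A, B)
print(C.shape)  # (1, 5, 6)
```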
Now imagine that A and C above are greyscale images (1 channel), and that there is an image transformation that turns A into C. To be specific, assuming familiarity with OpenCV in Python and the functions cv2.warpAffine and cv2.warpPerspective , let's say (pseudocode):
C = cv2.warpSomething(A, **kwargs)
My question is: assuming the above equations hold, how can I compute B (efficiently enough) from the following variables (pseudocode):
x, y, z, t, the_transformation (i.e. warpAffine or warpPerspective, M, flags, borderMode, borderValue)
I'm also satisfied if one can produce B from only (warp, x, y, z, t, M), fixing flags=INTER_LINEAR, borderMode=BORDER_CONSTANT and borderValue=0.
Thanks in advance!
If there are N pixels in both A and C, then the transformation tensor B has N**2 components. For N on the order of 1E+6, you really don't want to store the B tensor. If it's a very small dataset, you could try something like this:
# assuming A and C are already initialized as 2-D images, and that
# affine_something(img) applies your fixed warp, e.g. something like
# lambda img: cv2.warpAffine(img, M, (C.shape[1], C.shape[0]))
B = np.zeros(A.shape + C.shape)
A1 = np.zeros_like(A)
for i in range(A1.shape[0]):
    for j in range(A1.shape[1]):
        A1[i, j] = 1                          # probe with a unit impulse at (i, j)
        B[i, j, :, :] = affine_something(A1)  # its image is one slice of B
        A1[i, j] = 0                          # reset for the next pixel
But this is still very slow and inefficient.
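To see why this impulse-probing works, here is a self-contained sketch. It uses np.roll as a stand-in for the OpenCV warp (an assumption for illustration only); the construction relies solely on the transformation being linear in the pixel values, which warpAffine/warpPerspective with borderValue=0 are, and which is what makes a tensor B exist at all:

```python
import numpy as np

def warp(img):
    # Stand-in for the fixed OpenCV warp. Any transformation that is
    # linear in the pixel values behaves the same; here, a 1-pixel shift.
    return np.roll(img, shift=1, axis=1)

A = np.random.rand(4, 5)  # small 2-D test image
C = warp(A)

# Build B by probing the warp with one unit impulse per pixel.
B = np.zeros(A.shape + C.shape)
A1 = np.zeros_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        A1[i, j] = 1
        B[i, j] = warp(A1)
        A1[i, j] = 0

# By linearity, the tensor contraction reproduces the warp.
assert np.allclose(np.tensordot(A, B), C)
```

The same loop with cv2.warpAffine in place of warp should give the B you are after for small images, but the storage cost (x*y*z*t floats) makes it impractical at full resolution.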