How to draw 3D coordinate axes with OpenCV for face pose estimation?
You can do this in pure OpenCV as long as you have the camera parameters. Create three vectors corresponding to the x, y, and z axes (essentially the points [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]), rotate them according to your yaw/pitch/roll (i.e. multiply them by the rotation matrix), and then project them into the image plane.

To project 3D points into the image plane, use the projectPoints function: it takes the 3D points and the camera parameters and produces 2D image points. Once you have the image points, simply use the line function to draw a line from the projected center point ([0, 0, 0] in 3D) to each projected axis endpoint.

A simple example:
import cv2
import numpy as np

def draw_axis(img, R, t, K):
    # unit is mm
    rotV, _ = cv2.Rodrigues(R)
    points = np.float32([[100, 0, 0], [0, 100, 0], [0, 0, 100], [0, 0, 0]]).reshape(-1, 3)
    axisPoints, _ = cv2.projectPoints(points, rotV, t, K, (0, 0, 0, 0))
    axisPoints = axisPoints.astype(int)  # cv2.line requires integer pixel coordinates
    img = cv2.line(img, tuple(axisPoints[3].ravel()), tuple(axisPoints[0].ravel()), (255, 0, 0), 3)
    img = cv2.line(img, tuple(axisPoints[3].ravel()), tuple(axisPoints[1].ravel()), (0, 255, 0), 3)
    img = cv2.line(img, tuple(axisPoints[3].ravel()), tuple(axisPoints[2].ravel()), (0, 0, 255), 3)
    return img
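With zero distortion, projectPoints reduces to the pinhole model: apply the pose, multiply by the intrinsic matrix, and divide by depth. A minimal numpy-only sketch of that projection (the intrinsics and pose below are hypothetical example values, not taken from the answer above):

```python
import numpy as np

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # no rotation
t = np.array([0.0, 0.0, 1000.0])    # 1000 mm in front of the camera

def project(point_3d):
    """Pinhole projection: pose, then intrinsics, then divide by depth."""
    p = K @ (R @ point_3d + t)
    return p[:2] / p[2]

origin = project(np.array([0.0, 0.0, 0.0]))    # lands on the principal point
x_axis = project(np.array([100.0, 0.0, 0.0]))  # 100 mm along X
print(origin, x_axis)  # → [320. 240.] [370. 240.]
```

This is the same computation projectPoints performs internally (before distortion), which is why the drawn axes shrink with distance and shift with the principal point.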
I used this code. It comes from the Basel Face Model (BFM); you can find the MATLAB code on their website.
import math

import cv2
import numpy as np

def draw_axis(img, yaw, pitch, roll, tdx=None, tdy=None, size=100):
    # Convert degrees to radians; yaw is negated to match image coordinates
    pitch = pitch * np.pi / 180
    yaw = -(yaw * np.pi / 180)
    roll = roll * np.pi / 180

    if tdx is None or tdy is None:
        # Default to the image center
        height, width = img.shape[:2]
        tdx = width / 2
        tdy = height / 2

    # X-axis, pointing to the right, drawn in red
    x1 = size * (math.cos(yaw) * math.cos(roll)) + tdx
    y1 = size * (math.cos(pitch) * math.sin(roll) + math.cos(roll) * math.sin(pitch) * math.sin(yaw)) + tdy

    # Y-axis, pointing down, drawn in green
    x2 = size * (-math.cos(yaw) * math.sin(roll)) + tdx
    y2 = size * (math.cos(pitch) * math.cos(roll) - math.sin(pitch) * math.sin(yaw) * math.sin(roll)) + tdy

    # Z-axis, pointing out of the screen, drawn in blue
    x3 = size * (math.sin(yaw)) + tdx
    y3 = size * (-math.cos(yaw) * math.sin(pitch)) + tdy

    cv2.line(img, (int(tdx), int(tdy)), (int(x1), int(y1)), (0, 0, 255), 3)
    cv2.line(img, (int(tdx), int(tdy)), (int(x2), int(y2)), (0, 255, 0), 3)
    cv2.line(img, (int(tdx), int(tdy)), (int(x3), int(y3)), (255, 0, 0), 3)
    return img
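The closed-form x/y expressions above are the first two rows of the combined yaw/pitch/roll rotation applied to the axis endpoints. A quick sanity check of just that math (pure Python, no OpenCV): at zero rotation the X axis should project to (size, 0), the Y axis to (0, size), and the Z axis onto the origin.

```python
import math

def axis_endpoints(yaw, pitch, roll, size=100, tdx=0.0, tdy=0.0):
    """Projected 2D endpoints of the X, Y, Z axes (same math as draw_axis above)."""
    pitch = math.radians(pitch)
    yaw = -math.radians(yaw)
    roll = math.radians(roll)
    x1 = size * (math.cos(yaw) * math.cos(roll)) + tdx
    y1 = size * (math.cos(pitch) * math.sin(roll) + math.cos(roll) * math.sin(pitch) * math.sin(yaw)) + tdy
    x2 = size * (-math.cos(yaw) * math.sin(roll)) + tdx
    y2 = size * (math.cos(pitch) * math.cos(roll) - math.sin(pitch) * math.sin(yaw) * math.sin(roll)) + tdy
    x3 = size * math.sin(yaw) + tdx
    y3 = size * (-math.cos(yaw) * math.sin(pitch)) + tdy
    return (x1, y1), (x2, y2), (x3, y3)

# With no rotation, X lies along +x, Y along +y, and Z collapses onto the origin
x_end, y_end, z_end = axis_endpoints(0, 0, 0)
print(x_end, y_end, z_end)
```

Note that a positive pitch moves the Z endpoint up the image (negative y), which matches the "out of the screen" convention used in the function above.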
Some clarifications on the code given above:
import cv2
import numpy as np

def draw_axis(img, rotation_vec, t, K, scale=0.1, dist=None):
    """
    Draw a 6-DoF axis (XYZ -> RGB) at the given rotation and translation.
    :param img: RGB numpy array
    :param rotation_vec: Rodrigues rotation vector, numpy array of length 3;
                         use cv2.Rodrigues(R)[0] to convert from a rotation matrix
    :param t: 3D translation vector, in meters (dtype must be float)
    :param K: 3x3 intrinsic calibration matrix
    :param scale: factor controlling the axis lengths
    :param dist: optional distortion coefficients, numpy array of length 4;
                 if None, distortion is ignored
    """
    img = img.astype(np.float32)
    dist = np.zeros(4, dtype=float) if dist is None else dist
    points = scale * np.float32([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]).reshape(-1, 3)
    axis_points, _ = cv2.projectPoints(points, rotation_vec, t, K, dist)
    axis_points = axis_points.astype(int)  # cv2.line requires integer pixel coordinates
    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[0].ravel()), (255, 0, 0), 3)
    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[1].ravel()), (0, 255, 0), 3)
    img = cv2.line(img, tuple(axis_points[3].ravel()), tuple(axis_points[2].ravel()), (0, 0, 255), 3)
    return img