Project 3D mesh on 2D image using camera intrinsic matrix

I have been trying to use the HOnnotate dataset to extract perspective-correct hand and object masks, like the ones shown in the images of Task-3 of the HANDS-2019 challenge.

The dataset comes with the following annotations (a minimal loading sketch follows the list):

annotations:
    The annotations are provided in pickled files under the meta folder for each sequence. The pickle files in the training data contain a dictionary with the following keys:
    objTrans: A 3x1 vector representing object translation
    objRot: A 3x1 vector representing object rotation in axis-angle representation
    handPose: A 48x1 vector representing the 3D rotation of the 16 hand joints including the root joint in axis-angle representation. The ordering of the joints follows the MANO model convention (see joint_order.png) and can be directly fed to the MANO model.
    handTrans: A 3x1 vector representing the hand translation
    handBeta: A 10x1 vector representing the MANO hand shape parameters
    handJoints3D: A 21x3 matrix representing the 21 3D hand joint locations
    objCorners3D: A 8x3 matrix representing the 3D bounding box corners of the object
    objCorners3DRest: A 8x3 matrix representing the 3D bounding box corners of the object before applying the transformation
    objName: Name of the object as given in YCB dataset
    objLabel: Object label as given in YCB dataset
    camMat: Intrinsic camera parameters
    handVertContact: A 778D boolean vector where each element represents whether the corresponding MANO vertex is in contact with the object. A MANO vertex is in contact if its distance to the object surface is <4mm
    handVertDist: A 778D float vector representing the distance of MANO vertices to the object surface.
    handVertIntersec: A 778D boolean vector specifying if the MANO vertices are inside the object surface.
    handVertObjSurfProj: A 778x3 matrix representing the projection of MANO vertices on the object surface.
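
As a minimal sketch of how these annotations can be read (the sequence and frame names below are hypothetical; adjust them to your copy of the dataset), one of the pickled meta files can be loaded and camMat used as the 3x3 intrinsic matrix:

import pickle
import numpy as np

# hypothetical path; the meta folder of each sequence holds the pickled annotation files
with open("train/SEQ_NAME/meta/0000.pkl", "rb") as f:
    # if the pickle was written with Python 2, pickle.load(f, encoding='latin1') may be needed
    anno = pickle.load(f)

camMat = np.asarray(anno['camMat'])              # 3x3 camera intrinsic matrix
handJoints3D = np.asarray(anno['handJoints3D'])  # 21x3 hand joint locations
objCorners3D = np.asarray(anno['objCorners3D'])  # 8x3 object bounding-box corners
print(camMat)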

It also comes with a visualization script ( https://github.com/shreyashampali/ho3d ) that can render the annotations either as a 3D mesh (using Open3D) or as a 2D projection on the image.

[images: the Open3D 3D mesh rendering and the 2D projection on the image]

What I am trying to do is project the visualization created by Open3D back onto the original image.

So far I have not been able to do this. What I have been able to do is get the point cloud from the 3D mesh and apply the camera intrinsics to it to make it perspective-correct. The question now is how to create masks for the hand and the object from that point cloud, like the ones from the Open3D rendering.

import numpy as np
import cv2
import open3d

# The code so far looks as follows.
# "mesh" is an Open3D triangle mesh, i.e. an "open3d.geometry.TriangleMesh()"
pcd = open3d.geometry.PointCloud()
pcd.points = mesh.vertices
pcd.colors = mesh.vertex_colors
pcd.normals = mesh.vertex_normals

pts3D = np.asarray(pcd.points)
# the hand/object lie along the negative z-axis, so flip y and z into the OpenCV camera
# convention before projecting
cord_change_mat = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., -1.]], dtype=np.float32)
pts3D = pts3D.dot(cord_change_mat.T)

# "anno['camMat']" is the 3x3 camera intrinsic matrix (no rotation/translation, no distortion)
img_points, _ = cv2.projectPoints(pts3D, (0, 0, 0), (0, 0, 0), anno['camMat'], np.zeros(4, dtype='float32'))

# draw the perspective-correct point cloud back onto the image
for point in img_points:
    p1, p2 = int(point[0][0]), int(point[0][1])
    img[p2, p1] = (255, 255, 255)

[image: the sparse projected points drawn onto the original image]

Basically, I am trying to get this segmentation mask:

[image: the desired segmentation mask]
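
For reference, one hypothetical way to turn the projected vertices from the snippet above into a filled mask like this (it is not the route taken in the answer below) would be to rasterize the mesh triangles with OpenCV, assuming mesh.triangles holds the per-triangle vertex indices:

# hypothetical sketch: fill every projected mesh triangle to get a binary mask
pts2D = img_points.reshape(-1, 2)   # projected vertices from cv2.projectPoints above
tris = np.asarray(mesh.triangles)   # Nx3 array of vertex indices per triangle

mask = np.zeros(img.shape[:2], dtype=np.uint8)
for tri in tris:
    poly = np.round(pts2D[tri]).astype(np.int32)  # 3x2 triangle in pixel coordinates
    cv2.fillConvexPoly(mask, poly, 255)
cv2.imwrite("mask_from_triangles.png", mask)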

PS. Sorry if this does not make much sense; I am very new to 3D meshes, point clouds, and their projections, and I do not yet know all the correct technical terms. Leave a comment with any questions and I will explain as best I can.

It turns out there is an easy way to do this task using Open3D and the camera intrinsic values. Basically, we instruct Open3D to render the image from the camera's point of view.


import numpy as np
import cv2
import open3d
import open3d.visualization.rendering as rendering

# Create a renderer with a set image width and height
render = rendering.OffscreenRenderer(img_width, img_height)

# set up the camera intrinsic values
# (fx = camMat[0, 0], fy = camMat[1, 1], cx = camMat[0, 2], cy = camMat[1, 2]; this object is
#  not used further below because set_projection() is called with camMat directly)
pinhole = open3d.camera.PinholeCameraIntrinsic(img_width, img_height, fx, fy, cx, cy)
    
# Pick a background colour for the rendered image; I set it to black (the default is light gray)
render.scene.set_background([0.0, 0.0, 0.0, 1.0])  # RGBA

# now create your mesh
mesh = open3d.geometry.TriangleMesh()
# define the mesh geometry here: vertices, triangles, etc. (omitted)
# paint the mesh red; painting only takes effect once the vertices exist
mesh.paint_uniform_color([1.0, 0.0, 0.0])

# Define a simple unlit Material.
# (The base color does not replace the mesh's own vertex colors.)
mtl = rendering.MaterialRecord()  # rendering.Material() in Open3D versions before 0.15
mtl.base_color = [1.0, 1.0, 1.0, 1.0]  # RGBA
mtl.shader = "defaultUnlit"

# add mesh to the scene
render.scene.add_geometry("MyMeshModel", mesh, mtl)

# render the scene with respect to the camera
# (camMat is the 3x3 intrinsic matrix from the annotations; 0.1 and 1.0 are the near and far
#  clipping planes, and 640x480 is the rendered image size)
render.scene.camera.set_projection(camMat, 0.1, 1.0, 640, 480)
img_o3d = render.render_to_image()

# we can now save the rendered image right at this point 
open3d.io.write_image("output.png", img_o3d, 9)


# Optionally, we can convert the image to OpenCV format and play around.
# For my use case I mapped it onto the original image to check quality of 
# segmentations and to create masks.
# (Note: OpenCV expects the color in BGR format, so swap red and blue.)
img_cv2 = cv2.cvtColor(np.array(img_o3d), cv2.COLOR_RGBA2BGR)
cv2.imwrite("cv_output.png", img_cv2)
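
Since the background of the rendered scene was set to black above, a binary mask and an overlay on the original frame can be produced with a few more lines (a minimal sketch, assuming img is the original image loaded with OpenCV in BGR):

# every non-black pixel of the rendering belongs to the mesh
gray = cv2.cvtColor(img_cv2, cv2.COLOR_BGR2GRAY)
mask = np.where(gray > 0, 255, 0).astype(np.uint8)

# overlay the rendering on the original image to eyeball the segmentation quality
overlay = img.copy()
overlay[mask > 0] = img_cv2[mask > 0]
cv2.imwrite("mask.png", mask)
cv2.imwrite("overlay.png", overlay)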

This answer borrows a lot from this answer.
