Kinect + OpenCV: Unable to fetch rotational vectors using cv2.solvePnP in python

I am working on a project where I need to track aerial objects and calculate their six degrees of freedom.

  1. I am currently tracking colored balls, computing their centers in the rgb_frame, and using those center coordinates to look up the depth in the depth_frame.

  2. After finding the depth (Z) in real-world coordinates, I calculate the real-world X and Y using the equations X = Z*u/fx and Y = Z*v/fy, where fx and fy are the focal lengths obtained from the Kinect's intrinsic parameters, and u and v in this case are the center's x, y pixel values (see the sketch after this list).

  3. I treat (u, v) as the image point and (X, Y, Z) as the object point, and feed them into this method: solvePnP

    obj_pts = np.array([[X, Y, Z]], np.float64)
    img_pts = np.array([[u, v]], np.float64)

    ret, rvecs, tvecs = cv2.solvePnP(obj_pts, img_pts, camera_matrix2, np_dist_coefs)
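
As referenced in step 2 above, here is a minimal sketch of the depth lookup and back-projection. It assumes a depth frame registered to the RGB frame and measured in millimetres; depth_frame, fx and fy are hypothetical names standing in for the Kinect data and intrinsics mentioned in the question:

    import numpy as np

    def back_project(u, v, depth_frame, fx, fy):
        # Depth at the ball's center (assumed millimetres -> metres).
        Z = depth_frame[int(v), int(u)] / 1000.0
        # Equations from step 2: X = Z*u/fx, Y = Z*v/fy.
        X = Z * u / fx
        Y = Z * v / fy
        return X, Y, Z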

I expect to find the rvecs, which I will then use as input for:

cv2.Rodrigues(rvecs)

to get the Euler angles, namely pitch, yaw, and roll.
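
Since cv2.Rodrigues only converts the rotation vector into a 3x3 rotation matrix (it also returns a Jacobian), the Euler angles still have to be extracted from that matrix. A minimal sketch, assuming the common x-y-z (roll, pitch, yaw) convention; other conventions will give different angles:

    import cv2
    import numpy as np

    R, _ = cv2.Rodrigues(rvecs)                    # 3x3 rotation matrix
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    roll  = np.arctan2(R[2, 1], R[2, 2])
    pitch = np.arctan2(-R[2, 0], sy)
    yaw   = np.arctan2(R[1, 0], R[0, 0])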

I am presently having issues with the solvePnP call, which gives me the following error:

/opencv-3.0.0/modules/calib3d/src/solvepnp.cpp:61: error: (-215) npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F)) in function solvePnP

I also understand that sending just the center's object and image points is not recommended. But this is my first step towards the realization. I intend to use feature detectors like SIFT to make it more interesting later.

Can anyone please comment on my approach and help me accomplish finding the six degrees of freedom:

forward/back, up/down, left/right, pitch, yaw, roll.

While my approach was exactly correct, I missed the fact that to calculate the orientation of a 3D object in real-world coordinates we need at least 4 corresponding point pairs in pixel and object coordinates.

I just had to check for contours in the image that had more than 4 points; with that change my code works as intended and I am able to compute the six degrees of freedom accurately. By now I have made it more sophisticated, and soon I will be able to share a git link to my work.
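
A minimal sketch of that fix, using hypothetical names: contour is one contour returned by cv2.findContours, back_project, depth_frame, fx and fy are from the earlier sketch, and camera_matrix2 / np_dist_coefs are the intrinsics and distortion coefficients from the question:

    # Keep only contours with more than 4 points, as described above.
    if len(contour) > 4:
        img_pts = contour.reshape(-1, 2).astype(np.float64)   # (N, 2) pixel points
        obj_pts = np.array(
            [back_project(u, v, depth_frame, fx, fy) for u, v in img_pts],
            np.float64)                                        # (N, 3) object points
        ret, rvecs, tvecs = cv2.solvePnP(obj_pts, img_pts,
                                         camera_matrix2, np_dist_coefs)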

I would still like to hear how to properly test the correctness of the orientation that I compute using the Rodrigues formula.
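
One common sanity check, sketched here only as a suggestion: reproject the object points with the recovered pose via cv2.projectPoints and compare against the measured pixel points; a small mean reprojection error indicates a consistent pose:

    # Reproject the object points with the recovered pose and measure pixel error.
    proj, _ = cv2.projectPoints(obj_pts, rvecs, tvecs, camera_matrix2, np_dist_coefs)
    err = np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1).mean()
    print("mean reprojection error (pixels):", err)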
