
OpenCV unproject 2D points to 3D with known depth `Z`

Problem statement

I am trying to reproject 2D image points to their original 3D camera coordinates, assuming I know the depth of each point. Following the OpenCV documentation, I managed to get it to work with zero distortion. However, when distortion coefficients are involved, the result is not correct.

Current approach

So, the idea is to reverse the following:

[image: the OpenCV projection model with distortion, mapping camera coordinates (X, Y, Z) to pixel coordinates (u, v)]

into the following:

[image: the same projection without distortion (pinhole model)]

By:

  1. Getting rid of any distortion using cv::undistortPoints.
  2. Using the intrinsics to get back to the normalized camera coordinates by reversing the second equation above.
  3. Multiplying by z to reverse the normalization (a minimal numeric sketch of steps 2 and 3 follows this list).
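As a concrete illustration, here is a minimal numeric sketch of steps 2 and 3 for a single already-undistorted pixel, assuming the same example intrinsics used in the sample code below (f_x = f_y = 1000, c_x = c_y = 1000). It is not the full pipeline, just the inverse of the intrinsic mapping followed by the de-normalization:

# Example intrinsics (the same values as in the sample code below).
f_x, f_y, c_x, c_y = 1000.0, 1000.0, 1000.0, 1000.0

# An already-undistorted pixel (u, v) and its known depth Z.
u, v, Z = 166.667, 1166.667, 12.0

# Step 2: invert u = f_x * x + c_x and v = f_y * y + c_y
# to recover the normalized camera coordinates (x, y).
x = (u - c_x) / f_x
y = (v - c_y) / f_y

# Step 3: undo the perspective normalization by multiplying with Z.
X, Y = x * Z, y * Z
print(X, Y, Z)  # approximately (-10.0, 2.0, 12.0) for this example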

Questions

  1. Why do I need to subtract f_x and f_y to get back to the normalized camera coordinates (found empirically while testing)? In the code below, in step 2, if I don't subtract, even the non-distorted result is off. (This was my mistake -- I messed up the indexes.)
  2. If I include the distortion, the result is wrong -- what am I doing wrong?

Sample code (C++)

#include <iostream>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

std::vector<cv::Point2d> Project(const std::vector<cv::Point3d>& points,
                                 const cv::Mat& intrinsic,
                                 const cv::Mat& distortion) {
  std::vector<cv::Point2d> result;
  if (!points.empty()) {
    cv::projectPoints(points, cv::Mat(3, 1, CV_64F, cv::Scalar(0.)),
                      cv::Mat(3, 1, CV_64F, cv::Scalar(0.)), intrinsic,
                      distortion, result);
  }
  return result;
}

std::vector<cv::Point3d> Unproject(const std::vector<cv::Point2d>& points,
                                   const std::vector<double>& Z,
                                   const cv::Mat& intrinsic,
                                   const cv::Mat& distortion) {
  double f_x = intrinsic.at<double>(0, 0);
  double f_y = intrinsic.at<double>(1, 1);
  double c_x = intrinsic.at<double>(0, 2);
  double c_y = intrinsic.at<double>(1, 2);
  // This was an error before:
  // double c_x = intrinsic.at<double>(0, 3);
  // double c_y = intrinsic.at<double>(1, 3);

  // Step 1. Undistort
  std::vector<cv::Point2d> points_undistorted;
  assert(Z.size() == 1 || Z.size() == points.size());
  if (!points.empty()) {
    cv::undistortPoints(points, points_undistorted, intrinsic,
                        distortion, cv::noArray(), intrinsic);
  }

  // Step 2. Reproject
  std::vector<cv::Point3d> result;
  result.reserve(points.size());
  for (size_t idx = 0; idx < points_undistorted.size(); ++idx) {
    const double z = Z.size() == 1 ? Z[0] : Z[idx];
    result.push_back(
        cv::Point3d((points_undistorted[idx].x - c_x) / f_x * z,
                    (points_undistorted[idx].y - c_y) / f_y * z, z));
  }
  return result;
}

int main() {
  const double f_x = 1000.0;
  const double f_y = 1000.0;
  const double c_x = 1000.0;
  const double c_y = 1000.0;
  const cv::Mat intrinsic =
      (cv::Mat_<double>(3, 3) << f_x, 0.0, c_x, 0.0, f_y, c_y, 0.0, 0.0, 1.0);
  const cv::Mat distortion =
      // (cv::Mat_<double>(4, 1) << 0.0, 0.0, 0.0, 0.0);  // This works!
      (cv::Mat_<double>(4, 1) << -0.32, 1.24, 0.0013, 0.0013);  // This doesn't!

  // Single point test.
  const cv::Point3d point_single(-10.0, 2.0, 12.0);
  const cv::Point2d point_single_projected = Project({point_single}, intrinsic,
                                                     distortion)[0];
  const cv::Point3d point_single_unprojected = Unproject({point_single_projected},
                                    {point_single.z}, intrinsic, distortion)[0];

  std::cout << "Expected Point: " << point_single.x;
  std::cout << " " << point_single.y;
  std::cout << " " << point_single.z << std::endl;
  std::cout << "Computed Point: " << point_single_unprojected.x;
  std::cout << " " << point_single_unprojected.y;
  std::cout << " " << point_single_unprojected.z << std::endl;
}

Same Code (Python)

import cv2
import numpy as np

def Project(points, intrinsic, distortion):
  result = []
  rvec = tvec = np.array([0.0, 0.0, 0.0])
  if len(points) > 0:
    result, _ = cv2.projectPoints(points, rvec, tvec,
                                  intrinsic, distortion)
  return np.squeeze(result, axis=1)

def Unproject(points, Z, intrinsic, distortion):
  f_x = intrinsic[0, 0]
  f_y = intrinsic[1, 1]
  c_x = intrinsic[0, 2]
  c_y = intrinsic[1, 2]
  # This was an error before
  # c_x = intrinsic[0, 3]
  # c_y = intrinsic[1, 3]

  # Step 1. Undistort.
  points_undistorted = np.array([])
  if len(points) > 0:
    points_undistorted = cv2.undistortPoints(np.expand_dims(points, axis=1), intrinsic, distortion, P=intrinsic)
  points_undistorted = np.squeeze(points_undistorted, axis=1)

  # Step 2. Reproject.
  result = []
  for idx in range(points_undistorted.shape[0]):
    z = Z[0] if len(Z) == 1 else Z[idx]
    x = (points_undistorted[idx, 0] - c_x) / f_x * z
    y = (points_undistorted[idx, 1] - c_y) / f_y * z
    result.append([x, y, z])
  return result

f_x = 1000.
f_y = 1000.
c_x = 1000.
c_y = 1000.

intrinsic = np.array([
  [f_x, 0.0, c_x],
  [0.0, f_y, c_y],
  [0.0, 0.0, 1.0]
])

distortion = np.array([0.0, 0.0, 0.0, 0.0])  # This works!
distortion = np.array([-0.32, 1.24, 0.0013, 0.0013])  # This doesn't!

point_single = np.array([[-10.0, 2.0, 12.0],])
point_single_projected = Project(point_single, intrinsic, distortion)
Z = np.array([point[2] for point in point_single])
point_single_unprojected = Unproject(point_single_projected,
                                     Z,
                                     intrinsic, distortion)
print "Expected point:", point_single[0]
print "Computed point:", point_single_unprojected[0]

The results for zero-distortion (as mentioned) are correct:

Expected Point: -10 2 12
Computed Point: -10 2 12

But when the distortions are included, the result is off:

Expected Point: -10 2 12
Computed Point: -4.26634 0.848872 12

Update 1. Clarification

This is a camera-to-image projection -- I am assuming the 3D points are in camera-frame coordinates.

Update 2. Figured out the first question

OK, I figured out the subtraction of f_x and f_y -- I was stupid enough to mess up the indexes. The code above has been updated with the fix. The other question still holds.

Update 3. Added Python equivalent code

To increase visibility, I am adding the Python code, which exhibits the same error.

Answer to Question 2

I found what the problem was -- the 3D point coordinates matter! I assumed that no matter which 3D points I chose, the reconstruction would take care of it. However, I noticed something strange: when using a range of 3D points, only a subset of those points was reconstructed correctly. After further investigation, I found out that only the points lying within the camera's field of view are reconstructed properly. The field of view is a function of the intrinsic parameters (and vice versa).
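If you want to guard against this, a simple sanity check is to verify that a 3D point actually projects into the image before trusting the round trip. Below is a minimal sketch in Python; the image size is a hypothetical choice -- substitute your own camera's resolution -- and with strong distortion coefficients it is only a rough check, since the polynomial model can fold far-away points back into the image.

import cv2
import numpy as np

def ProjectsIntoImage(point_3d, intrinsic, distortion, image_size):
  # `image_size` is (width, height) -- a hypothetical value, use your camera's.
  if point_3d[2] <= 0:  # behind the camera
    return False
  rvec = tvec = np.zeros(3)
  projected, _ = cv2.projectPoints(np.array([point_3d], dtype=np.float64),
                                   rvec, tvec, intrinsic, distortion)
  u, v = projected[0, 0]
  width, height = image_size
  return 0 <= u < width and 0 <= v < height

# Example with the intrinsics from my camera and an assumed 1920x1200 sensor:
# ProjectsIntoImage([10.0, -2.0, 30.0], intrinsic, distortion, (1920, 1200))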

For the above code to work, try setting the parameters as follows (the intrinsics are from my camera):

...
const double f_x = 2746.;
const double f_y = 2748.;
const double c_x = 991.;
const double c_y = 619.;
...
const cv::Point3d point_single(10.0, -2.0, 30.0);
...

Also, don't forget that in camera coordinates a negative y coordinate means UP :)
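To see that convention concretely, here is a tiny sketch (using the intrinsics above and zero distortion): a point with negative Y in camera coordinates lands above the principal point, because the image v axis points down.

import cv2
import numpy as np

intrinsic = np.array([[2746.0, 0.0, 991.0],
                      [0.0, 2748.0, 619.0],
                      [0.0, 0.0, 1.0]])
no_distortion = np.zeros(5)

point_up = np.array([[10.0, -2.0, 30.0]])  # Y = -2 -> "up" in the camera frame
uv, _ = cv2.projectPoints(point_up, np.zeros(3), np.zeros(3),
                          intrinsic, no_distortion)
print(uv[0, 0])  # v is about 436, i.e. above the image center c_y = 619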

Answer to Question 1:

There was a bug where I was trying to access the intrinsics using:

...
double f_x = intrinsic.at<double>(0, 0);
double f_y = intrinsic.at<double>(1, 1);
double c_x = intrinsic.at<double>(0, 3);
double c_y = intrinsic.at<double>(1, 3);
...

But intrinsic is a 3x3 matrix, so column index 3 is out of bounds; the principal point lives at (0, 2) and (1, 2).

Moral of the story: write unit tests!!!
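In that spirit, here is a minimal round-trip test sketch, assuming the Project and Unproject functions from the Python code above are in scope and that the chosen points lie well inside the field of view; the tolerance is my own, somewhat loose, choice.

import numpy as np

def TestProjectUnprojectRoundTrip():
  intrinsic = np.array([[2746.0, 0.0, 991.0],
                        [0.0, 2748.0, 619.0],
                        [0.0, 0.0, 1.0]])
  distortion = np.array([-0.32, 1.24, 0.0013, 0.0013])
  points = np.array([[10.0, -2.0, 30.0],
                     [1.0, 5.0, 25.0]])
  projected = Project(points, intrinsic, distortion)
  recovered = Unproject(projected, points[:, 2], intrinsic, distortion)
  np.testing.assert_allclose(recovered, points, atol=1e-4)

TestProjectUnprojectRoundTrip()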
