
Dense pixelwise reverse projection

I saw a question on reverse projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a somewhat more general version of the same problem:

Given either a focal length (which can be solved to produce arcseconds / pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used - it's directly related to focal length), compute the camera ray that goes through each pixel.
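One way to sketch this: for a standard 3x3 pinhole intrinsic matrix K (focal lengths on the diagonal, principal point in the last column), the ray through pixel (u, v) in camera coordinates is proportional to K⁻¹·[u, v, 1]ᵀ. The numbers below (f = 800 px, a 640x480 image) are made-up example values, and lens distortion is ignored:

```python
import numpy as np

def pixel_rays(K, width, height):
    """Back-project every pixel (u, v) to a unit ray direction in
    camera coordinates: d ~ K^-1 @ [u, v, 1]^T.
    Assumes an undistorted pinhole model."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    dirs = np.linalg.inv(K) @ pix                                      # 3 x N ray directions
    dirs = dirs / np.linalg.norm(dirs, axis=0, keepdims=True)          # normalize to unit length
    return dirs.T.reshape(height, width, 3)

# Hypothetical intrinsics: f = 800 px, principal point at the image centre
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
rays = pixel_rays(K, 640, 480)
```

The ray at the principal point comes out along the optical axis (0, 0, 1), which is a quick sanity check on the sign conventions.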

I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose for each frame (given a sufficiently large sample, of course). All of that is really just a massively parallel implementation of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having trouble with.

A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.

That's a good suggestion... I'll definitely look into it (PhotoSynth is what got me interested in this subject again - and I've already been working towards this for robochamps for a few months) - but that's a sparse implementation: it looks for "good" features (points in the image that should be easy to identify in other views of the same scene). While I certainly plan to score each match by how good it is, I'm hoping for a full-density algorithm that gives me every pixel... or should I say voxel, lol?

After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
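Yes - the extrinsic matrix [R|t] maps world coordinates into camera coordinates, so the camera centre in world space is C = -Rᵀt, and a camera-frame ray direction d becomes Rᵀd in the world frame. A small sketch with a made-up pose (30° yaw, arbitrary translation):

```python
import numpy as np

# Hypothetical extrinsics: R rotates world -> camera, t translates.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, -1.0, 2.0])

C = -R.T @ t                        # camera position in world space
d_cam = np.array([0.0, 0.0, 1.0])   # optical axis in camera coordinates
d_world = R.T @ d_cam               # same ray expressed in the world frame
```

By construction, plugging C back through the extrinsics gives the origin of the camera frame (R @ C + t = 0), which is the "where the camera actually is" part of the question.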

I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)

