
Get 3D coordinates in OpenCV having X,Y and distance to object

I am trying to convert X,Y position of a tracked object in an image to 3D coordinates.

I got the distance to the object from the size of the tracked object (a marker), but now I need to convert all of this into a 3D coordinate in space. I have been reading a lot about this, but every method I found requires a calibration matrix.

In my case I don't need much precision, but I do need this to work with multiple cameras without calibration. Is there a way to achieve what I'm trying to do?

The "without calibration" bit dooms you, sorry.

Without knowing the focal length (or, equivalently, the field of view) you cannot "convert" a pixel into a ray.
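To see why the focal length is the missing piece, here is a minimal sketch of the pinhole back-projection the questioner wants. The focal length `f_px` (in pixels) and the principal point `cx, cy` are assumptions; without calibration you would have to guess them (e.g. principal point at the image center):

```python
import numpy as np

def backproject(x, y, distance, f_px, cx, cy):
    """Back-project pixel (x, y) at a known distance to a 3D point
    in the camera frame, using the pinhole model.

    f_px   : focal length in pixels -- exactly the calibration value
             the question hopes to avoid
    cx, cy : principal point, often approximated by the image center
    """
    # Direction of the ray through the pixel, in camera coordinates
    ray = np.array([(x - cx) / f_px, (y - cy) / f_px, 1.0])
    ray /= np.linalg.norm(ray)   # unit-length ray
    return distance * ray        # scale by the measured distance

# Example: 640x480 image, assumed f_px = 500, marker at pixel (400, 300), 2 m away
point = backproject(400, 300, 2.0, f_px=500.0, cx=320.0, cy=240.0)
```

Note that `point` lies exactly `distance` away from the camera center; a wrong guess for `f_px` bends the ray sideways, so the X and Y of the result degrade with the quality of the guess.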

Note that you can sometimes get an approximate calibration directly from the camera - for example, it might write a focal length for its lens into the EXIF header of the captured images.

If you're using some sort of microcontroller, it may be possible to point a distance sensor at the object seen through the camera to measure the distance directly.

You would most likely need a complex algorithm to get multiple cameras to work together to return the distance. Without calibration, there is no way for those cameras to work together, as Francesco said.

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address. For any questions please contact: yoyou2525@163.com.
