Does anybody know of an image analysis algorithm with which I can determine how large a room is (approximately, in real-life measurements, let's say width in meters or something) from one (or multiple) video recordings of that room?
I'm currently using OpenCV as my image library of choice, but I haven't gotten very far in terms of learning image analysis algorithms, just a name drop would be fine.
Thanks
Edit: Okay, a little bit of clarification I just got from the people involved. I basically have no control over how the video feed is taken, and I can't guarantee that there are multiple data sources. However, I do have a certain point's location in the room, and I'm supposed to place something in relation to that point. So I would probably be looking at trying to identify the edges of the room, then determining what fraction of the way into the room the given point is, and then guessing how large the room is.
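To make the "identify the edges" part concrete, here is a minimal sketch of gradient-based edge detection, the idea behind what OpenCV's `cv2.Sobel` and `cv2.Canny` do far more robustly. The synthetic image and the naive convolution here are stand-ins for illustration only; in practice you would feed a real video frame to OpenCV's built-in functions.

```python
import numpy as np

# Synthetic 8x8 image: a dark region meeting a bright region,
# i.e. a vertical edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 255.0

# Horizontal Sobel kernel: responds strongly to vertical edges.
kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Naive valid-mode 2D correlation (no padding), for illustration."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

gx = convolve2d(img, kx)
# Column with the strongest total gradient response = the edge location
# (in the valid-mode output's coordinates, offset by 1 from the image).
edge_col = int(np.argmax(np.abs(gx).sum(axis=0)))
print(edge_col)
```

With OpenCV you would replace all of this with a single `cv2.Canny(frame, low, high)` call and then look for long straight lines (e.g. via `cv2.HoughLinesP`) as candidate wall/floor boundaries.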
Awfully difficult (yet interesting!) problem.
If you are thinking of doing this in a completely automated way, I think you'll have a lot of issues. But I think this is doable if an operator can mark control points in a set of pictures.
Your problem can be stated more generally as finding the distance between two points in 3D space, when you only have the locations of these points in two or more 2D pictures taken from different points of view. The process will work more or less like this:

1. Calibrate each camera to recover its intrinsic parameters (focal length, principal point).
2. Mark corresponding points, i.e. the same physical points as they appear in two or more images.
3. Estimate the relative pose between the camera views from those correspondences.
4. Triangulate the corresponding points to recover their 3D coordinates, which are only known up to an overall scale.
5. Fix the scale using one known real-world distance in the scene; after that, distances between reconstructed points (e.g. the width of the room) come out in real units.
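The triangulation step can be sketched with plain numpy. This is the linear (DLT) triangulation that OpenCV's `cv2.triangulatePoints` implements; the camera matrices and 3D points below are synthetic values invented purely for the example (two cameras 1 m apart, two points on a wall 4 m apart).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: the point's 2D pixel coordinates in each view.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through a camera matrix to 2D pixels."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Assumed intrinsics and two camera poses: the second camera is
# translated 1 m along x relative to the first (a synthetic setup).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Two 3D points 4 m apart (say, two corners of a wall), projected
# into both views to simulate the operator-marked control points.
pts3d = [np.array([0.0, 0.0, 5.0]), np.array([4.0, 0.0, 5.0])]
rec = [triangulate(P1, P2, project(P1, X), project(P2, X)) for X in pts3d]

dist = np.linalg.norm(rec[0] - rec[1])  # distance between reconstructed points
print(dist)
```

With noiseless correspondences this recovers the 4 m separation exactly; with real marked points you would get a least-squares estimate, and the quality depends heavily on the calibration and on how far apart the two viewpoints are.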
Pretty easy, huh?