
How to transform an image based on the position of the camera

I'm trying to create a perspective projection of an image based on the look direction. I'm inexperienced in this field, however, and can't manage to do it myself. Will you help me, please?

There is an image and an observer (camera). If the camera can be considered an object on an invisible sphere, and the image a plane passing through the middle of the sphere, then the camera position can be expressed as:

x = d cos(θ) cos(φ)

y = d sin(θ)

z = d sin(φ) cos(θ)

Where θ is the latitude, φ is the longitude, and d is the distance (radius) from the middle of the sphere, which is where the middle of the image is.

I found these formulae somewhere, but I'm not sure about the coordinates (it looks to me as if x should be z, but I guess it depends on the coordinate system).

Now, what I need to do is apply a proper transformation to my image so that it looks as if viewed from the camera (in correct perspective). Would you be so kind as to tell me in a few words how this could be done? What steps should I take?

I'm developing an iOS app and thought I could use the following function from QuartzCore. But I have no idea what angle I should pass to it, or how to derive the x, y, z coordinates from the camera position.

CATransform3D CATransform3DRotate (CATransform3D t, CGFloat angle,
    CGFloat x, CGFloat y, CGFloat z)

So far I have successfully created a simple viewing perspective by:

  1. using an identity matrix (as the CATransform3D parameter) with .m34 set to -1/1000,
  2. rotating my image by the angle φ around the (0, 1, 0) vector,
  3. concatenating the result with a rotation by θ around the (1, 0, 0) vector,
  4. ignoring scaling based on d (I scale the image according to other criteria).

But the result I got was not what I wanted (which was to be expected) :-/ The perspective looks realistic only as long as one of the two angles is close to 0. I therefore thought there might be a way to calculate a proper angle and proper x, y and z coordinates to achieve the correct transformation (though this might be wrong, because it's just my guess).

I think I managed to find a solution, but unfortunately it is based on my own calculations, thoughts and experiments, so I have no idea whether it is correct. It seems to be OK, but you know...

So if the coordinate system is like this: [coordinate-system figure omitted]

and the plane of the image to be transformed goes through the X and Y axes, with its centre at the origin of the system, then the following coordinates:

x = d sin(φ) cos(θ)

y = d sin(θ)

z = d cos(θ) cos(φ)

define a vector that starts at the origin of the coordinate system and points to the position of the camera observing the image. d can be set to 1, so we get a unit vector at once, without further normalization. Theta is the angle in the ZY plane and phi is the angle in the ZX plane. Theta rises from 0° to 90° from the Z+ axis to the Y+ axis, whereas phi rises from 0° to 90° from the Z+ axis to the X+ axis (and to -90° in the opposite direction, in both cases).

Hence the transformation vector (the rotation axis) is:

x1 = -y / z

y1 = -x / z

z1 = 0.

I'm not sure about z1 = 0, but rotation around the Z axis seemed wrong to me.

The last thing to calculate is the angle by which the image has to be rotated. In my humble opinion, this should be the angle between the vector pointing to the camera (x, y, z) and the vector normal to the image, which is the Z axis (0, 0, 1).

The dot product of two vectors gives the cosine of the angle between them, so the angle is:

α = arccos(x * 0 + y * 0 + z * 1) = arccos(z).

Therefore the angle α and the coordinates x1, y1, z1 are the parameters for the CATransform3DRotate function I mentioned in my question.

I would be grateful if somebody could tell me if this approach is correct. Thanks a lot!
