
Kinect V2 - How to transform kinect v2 coordinate to real life measurements?

I am using Kinect v2. I want to turn Kinect skeleton coordinates into real-life measurements. I have read this: Understanding Kinect V2 Joints and Coordinate System

As I understand it, if the coordinates are 0.3124103 X, 0.5384778 Y, 2.244482 Z, that means I am 0.31 meters left of, 0.54 meters above, and 2.24 meters in front of the sensor. These are the coordinates of my head, and the sensor is 0.5 meters above the ground. Does that make my height about 1 meter? Or am I doing something wrong? Is there an optimal position or sensor height that gives a better estimate? Or is there a different method to calculate it? Does anybody know how to do this? Thank you :)
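The arithmetic in the question can be sketched like this (a minimal illustration, assuming the sensor faces exactly forward and sits 0.5 m above the floor, as stated above):

```python
# Naive height estimate that ignores sensor tilt.
head_y = 0.5384778    # head joint Y in camera space, meters above the sensor
sensor_height = 0.5   # sensor's mounting height above the floor, meters

head_above_floor = sensor_height + head_y
print(head_above_floor)  # ~1.04 m from the floor to the head joint
```

Note this measures the height of the head *joint* (roughly the middle of the head), so actual stature is slightly more; and, as the answer below explains, the result is only valid if the sensor is not tilted.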

You need to account for the tilt of the sensor.

In your example, your calculation is correct only if the sensor is facing exactly forward. If the Kinect is tilted upward, your actual height would be greater than that estimate.

You can calculate the tilt of the sensor and the height of the sensor using BodyFrame.FloorClipPlane.
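As a sketch of that step: FloorClipPlane is a 4-vector (x, y, z, w) describing the floor plane in camera space, where (x, y, z) is the floor's unit normal and w is the sensor's height above the floor in meters. A minimal Python illustration (the function name is mine, not part of the SDK):

```python
import math

def tilt_and_height(floor_clip_plane):
    """Extract sensor tilt (radians) and mounting height (meters)
    from a Kinect v2 FloorClipPlane tuple (x, y, z, w)."""
    x, y, z, w = floor_clip_plane
    # With no tilt the floor normal in camera space is (0, 1, 0);
    # tilting the camera up by angle A moves it to (0, cos A, sin A).
    tilt = math.atan2(z, y)
    return tilt, w

# Example: sensor tilted up ~10 degrees, mounted 0.5 m above the floor.
a = math.radians(10.0)
tilt, height = tilt_and_height((0.0, math.cos(a), math.sin(a), 0.5))
```

In the real SDK you would read the four components from `BodyFrame.FloorClipPlane` each frame rather than hard-coding them.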

Then you need to transform the joint coordinates from Kinect's camera space to the real-world xyz coordinates.

See the marked answer in the post "FloorClipPlane & Joint Data correlation" by Eddy Escardo-Raffo [MSFT]:

What you need to do is a coordinate transform from the Cartesian space defined by the basis vectors of the Kinect's point of view (let's call them KV) into the Cartesian space defined by the desired basis vectors (let's call these DV).

When the camera is not tilted at all, KV and DV are exactly the same, so, since this is the desired vector space, for simplicity we can use the standard unit vectors to represent the axes:

x: [1, 0, 0]

y: [0, 1, 0]

z: [0, 0, 1]

Now, when you tilt the camera upwards by an angle A, the x axis stays the same but the yz plane rotates by A (i.e., it corresponds exactly to a counter-clockwise rotation about the x axis), so the basis vectors of KV (expressed in terms of the basis vectors of DV) are now:

x: [1, 0, 0]

y: [0, cos A, -sin A]

z: [0, sin A, cos A]

To convert coordinates relative to KV into coordinates relative to DV, you perform a matrix multiplication between the transformation matrix whose columns are these KV basis vectors (http://en.wikipedia.org/wiki/Transformation_matrix) and a joint position vector that you receive from the Kinect API. This yields the joint position vector relative to DV.
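The transform described above can be sketched as follows (a minimal illustration, assuming the tilt angle has already been obtained, e.g. from FloorClipPlane; the function name is mine):

```python
import math

def kv_to_dv(joint, tilt):
    """Rotate a camera-space (KV) joint position into the untilted
    space (DV). The matrix columns are the KV basis vectors listed
    above: x=[1,0,0], y=[0,cos A,-sin A], z=[0,sin A,cos A]."""
    x, y, z = joint
    c, s = math.cos(tilt), math.sin(tilt)
    # [x']   [1    0    0 ] [x]
    # [y'] = [0  cos A  sin A] [y]
    # [z']   [0 -sin A  cos A] [z]
    return (x, c * y + s * z, -s * y + c * z)

# Sanity check: a point straight ahead of a camera tilted up by A
# should land above the horizontal plane in DV, at (0, sin A, cos A).
a = math.radians(10.0)
print(kv_to_dv((0.0, 0.0, 1.0), a))
```

Adding the sensor height (the w component of FloorClipPlane) to the resulting y coordinate then gives the joint's height above the floor.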


You may also find this answer helpful: Sergio's answer to "Transform world space using Kinect FloorClipPlane to move origin to floor while keeping orientation"

