
Convert a bounding box in ECEF coordinates to ENU coordinates

I have a geometry with its vertices in cartesian coordinates. These cartesian coordinates are ECEF (Earth-centred, Earth-fixed) coordinates. The geometry actually lies on an ellipsoidal model of the earth using WGS84 coordinates. The cartesian coordinates were obtained by converting the set of latitudes and longitudes along which the geometry lies, but I no longer have access to them. What I have is an axis-aligned bounding box with xmax, ymax, zmax and xmin, ymin, zmin obtained by parsing the cartesian coordinates (there is obviously no cartesian point of the geometry exactly at xmax, ymax, zmax or xmin, ymin, zmin; the bounding box is just a cuboid enclosing the geometry).

What I want to do is calculate the camera distance in an overview mode such that this geometry's bounding box perfectly fits the camera frustum.

I am not very clear on the approach to take here. A method like using a local-to-world matrix comes to mind, but it is not very clear to me.

@Specktre I referred to your suggestions on shifting points in 3D, and that led me to another, improved solution, though it is still not perfect.

  1. Compute a matrix that can transform from ECEF to ENU. Refer to http://www.navipedia.net/index.php/Transformations_between_ECEF_and_ENU_coordinates
  2. Rotate all eight corners of my original bounding box using this matrix.
  3. Compute a new bounding box by finding the min and max of x, y, z of these rotated points.
  4. Compute the distance:
    • cameraDistance1 = ((newbb.ymax - newbb.ymin)/2) / tan(fov/2)
    • cameraDistance2 = ((newbb.xmax - newbb.xmin)/2) / (tan(fov/2) * aspectRatio)
    • cameraDistance = max(cameraDistance1, cameraDistance2)

This time I had to use the aspect ratio along x, as I had previously expected, since in my application the fov is along y. Although this works almost accurately, I guess there is still a small bug. I am not very sure if it is a good idea to generate a new bounding box. Maybe it is more accurate to identify two points point1(xmax, ymin, zmax) and point2(xmax, ymax, zmax) in the original bounding box, find their values after multiplying with the matrix and then take (point2 - point1).length(). Similarly for y. Would that be more accurate? A rough sketch of the steps above is shown below.
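Below is a minimal sketch of steps 1-4, assuming GLM for the vector/matrix math. The Box struct and the ecefToEnu / cameraDistance names are illustrative, and since the original latitudes/longitudes are no longer available, the reference point for the ENU frame is taken from the box centre using the geocentric approximation (not exactly the geodetic latitude):

    // Hedged sketch, assuming GLM (https://github.com/g-truc/glm).
    #include <glm/glm.hpp>
    #include <algorithm>
    #include <cmath>

    struct Box { glm::dvec3 min, max; };

    // ECEF -> ENU rotation for a reference point at latitude phi, longitude lam
    // (radians), as given on the Navipedia page linked above.
    glm::dmat3 ecefToEnu(double phi, double lam)
    {
        return glm::dmat3(
            // GLM matrices are column-major, so each dvec3 below is one column.
            glm::dvec3(-std::sin(lam), -std::sin(phi) * std::cos(lam), std::cos(phi) * std::cos(lam)),
            glm::dvec3( std::cos(lam), -std::sin(phi) * std::sin(lam), std::cos(phi) * std::sin(lam)),
            glm::dvec3( 0.0,            std::cos(phi),                 std::sin(phi)));
    }

    double cameraDistance(const Box& ecefBox, double fovY, double aspectRatio)
    {
        glm::dvec3 centre = 0.5 * (ecefBox.min + ecefBox.max);
        // Geocentric latitude/longitude of the centre (approximation of the geodetic values).
        double lam = std::atan2(centre.y, centre.x);
        double phi = std::atan2(centre.z, std::sqrt(centre.x * centre.x + centre.y * centre.y));
        glm::dmat3 R = ecefToEnu(phi, lam);

        // Step 2 + 3: rotate all eight corners and rebuild an axis-aligned box in ENU space.
        glm::dvec3 lo(1e300), hi(-1e300);
        for (int i = 0; i < 8; ++i)
        {
            glm::dvec3 corner((i & 1) ? ecefBox.max.x : ecefBox.min.x,
                              (i & 2) ? ecefBox.max.y : ecefBox.min.y,
                              (i & 4) ? ecefBox.max.z : ecefBox.min.z);
            glm::dvec3 enu = R * (corner - centre);
            lo = glm::min(lo, enu);
            hi = glm::max(hi, enu);
        }

        // Step 4: distances needed to fit the vertical and horizontal extents into
        // the frustum (fovY is the vertical field of view, in radians).
        double d1 = 0.5 * (hi.y - lo.y) / std::tan(0.5 * fovY);
        double d2 = 0.5 * (hi.x - lo.x) / (std::tan(0.5 * fovY) * aspectRatio);
        return std::max(d1, d2);
    }

Rotating the corner offsets (corner - centre) rather than the absolute ECEF positions keeps the numbers small and yields the ENU extents of the box directly.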

  1. Transform matrix

    The first thing is to understand that a transform matrix represents a coordinate system. Look at Transform matrix anatomy for another example.

    In standard OpenGL notation, if you use the direct matrix then you are converting from the matrix's local space (LCS) to the world global space (GCS). If you use the inverse matrix then you convert coordinates from GCS to LCS.

  2. Camera matrix

    The camera matrix converts into camera space, so you need the inverse matrix. You get the camera matrix like this:

     camera=inverse(camera_space_matrix) 

    Now, for info on how to construct your camera_space_matrix so that it fits the bounding box, look here:

    So compute the midpoint of the top rectangle of your box, and compute the camera distance as the max of the distances computed from all vertices of the box, so

     camera position = midpoint + distance*midpoint_normal 

    The orientation depends on your projection matrix. If you use gluPerspective then you are viewing along -Z or +Z according to the selected glDepthFunc. So set the Z axis of the matrix to the normal; the Y, X vectors can then be aligned to North/South and East/West, for example

     Y = Z x (1,0,0)   X = Z x Y 

    Now put the position and the axis vectors X, Y, Z inside the matrix, compute the inverse matrix, and that is it.
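    As a hedged illustration of this construction (again assuming GLM; makeViewMatrix and its parameters are made-up names, and midpoint would be the midpoint of the top rectangle of the box described above):

    #include <glm/glm.hpp>
    #include <algorithm>
    #include <vector>

    glm::dmat4 makeViewMatrix(const glm::dvec3& midpoint,
                              const std::vector<glm::dvec3>& corners)
    {
        // Earth centre is (0,0,0), so the normal of the midpoint is just the
        // midpoint itself, normalized to size 1.0.
        glm::dvec3 normal = glm::normalize(midpoint);

        // Camera distance: the largest distance from any box corner to the midpoint.
        double distance = 0.0;
        for (const glm::dvec3& c : corners)
            distance = std::max(distance, glm::length(c - midpoint));

        glm::dvec3 position = midpoint + distance * normal;

        // Orientation: Z along the normal, Y and X built from the cross products above.
        glm::dvec3 Z = normal;
        glm::dvec3 Y = glm::normalize(glm::cross(Z, glm::dvec3(1.0, 0.0, 0.0)));
        glm::dvec3 X = glm::normalize(glm::cross(Z, Y));

        // camera_space_matrix: axes as columns, position as translation.
        glm::dmat4 cameraSpace(glm::dvec4(X, 0.0),
                               glm::dvec4(Y, 0.0),
                               glm::dvec4(Z, 0.0),
                               glm::dvec4(position, 1.0));

        // The view (camera) matrix is its inverse.
        return glm::inverse(cameraSpace);
    }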

camera space (diagram)

[Notes]

Do not forget that the FOV can have different angles for the X and Y axes (aspect ratio).
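For example, with GLM the projection is built from the vertical FOV and the aspect ratio explicitly, which is also why cameraDistance2 above divides by tan(fov/2) * aspectRatio; the numeric values below are placeholders, not taken from the question:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Placeholder values; in the real application these come from the viewport.
    const double     fovY        = glm::radians(45.0); // vertical field of view
    const double     aspectRatio = 16.0 / 9.0;         // width / height
    const glm::dmat4 projection  = glm::perspective(fovY, aspectRatio, 1.0, 1.0e8);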

The normal is just midpoint - Earth centre, and the Earth centre is (0,0,0), so the normal is also just the midpoint. Just normalize it to size 1.0.

For all computations use the cartesian world GCS (global coordinate system).
