
OpenGL how to properly calculate the camera matrix with GLM

I am working on an OpenGL graphics engine and I'm experiencing a very odd issue. Basically I'm importing (through Assimp) a .DAE scene (made in Cinema4D) which also contains a camera. The camera is at the origin, rotated 20 degrees to the left and 20 degrees up, so that a section of the cube should appear in the lower right corner of the viewport.

When rendering, I first calculate the "global" lookAt matrix by applying the world transformation matrix of the camera node within the scene graph to the lookAt matrix:

cameraMatrix = transform * glm::lookAt(camera->position, camera->lookAt, camera->upward);

and then use it to calculate the final meshes' modelview matrices:

// mesh.second is the world matrix
mat4 modelvMatrix = renderList->cameraMatrix * mesh.second;

which is then combined with the projection matrix and fed to the shader. However, the result (textures are not working yet) appears "mirrored", as if the transformations were applied in reverse:

Doing some math manually using the same transformation matrix:

//cameraMatrix = transform * glm::lookAt(camera->position, camera->lookAt, camera->upward);
cameraMatrix = camera->getCameraMatrix(transform);

mat4 Camera::getCameraMatrix(mat4p transform)
{
    // Rotate the camera's local direction and up vectors into world space
    auto invTr = glm::inverseTranspose(mat3(transform));
    // Transform the camera position into world space
    auto pos = vec3(transform * vec4(position, 1));
    auto dir = invTr * glm::normalize(lookAt - position);
    auto upw = invTr * upward;
    return glm::lookAt(pos, pos + dir, upw);
}

seems to solve the problem:

However, I am not sure the output is entirely correct, because it is not a perfect mirror image of the first one. The local transformation matrix of the camera node is:

mat4x4(
    (0.939693,  0.000000, -0.342020, 0.000000),
    (0.116978,  0.939693,  0.321394, 0.000000),
    (0.321394, -0.342020,  0.883022, 0.000000),
    (0.000000, -0.000000,  0.000000, 1.000000))

How should I properly calculate the camera matrix?

EDIT

I've been asked about the calculation of the matrices:

        mat4 modelvMatrix = renderList->cameraMatrix * mesh.second;
        mat4 renderMatrix = projectionMatrix * modelvMatrix;
        shaderProgram->setMatrix("renderMatrix", renderMatrix);
        mesh.first->render();

and the shader code:

const std::string Source::VertexShader= R"(
    #version 430 core

    layout(location = 0) in vec3 position;
    layout(location = 1) in vec3 normal;
    layout(location = 2) in vec2 vertexTexCoord;

    uniform mat4 renderMatrix;

    out vec2 texCoord;

    void main()
    {
        gl_Position = renderMatrix * vec4(position, 1.0);
        texCoord = vertexTexCoord;
    }
)";


const std::string Source::FragmentShader= R"(
    #version 430 core

    uniform sampler2D sampler;

    in vec2 texCoord;

    out vec3 color;

    void main()
    {
        color = vec3(0.0, 1.0, 0.0);
        //color = texture(sampler, texCoord);
    }
)";

First, this is wrong:

cameraMatrix = transform * glm::lookAt(camera->position, camera->lookAt, camera->upward);

The correct order is as follows:

MVP = P * V * M;

where P, V, and M are the projection, view, and model matrices respectively.

Also, that expression doesn't make sense, because glm::lookAt already calculates the lookAt matrix based on the camera's transform (assuming your 'transform' is the camera's model matrix).

Now, regarding glm::lookAt(): don't use it to get the view (camera) matrix. While it does return a matrix oriented in the direction you specified, that is not going to be a correct view matrix, because the eye position (the translation part of the matrix) is not inverted as it should be in a view matrix.

The simplest way to get a correct view matrix is to invert the camera's model matrix:

glm::mat4 V = glm::inverse(M);

That's it. Now you can feed 'V' to the shader or calculate the MVP matrix on the CPU.
