
D3D11 Coordinate System

Some of my old code has ended up with a bunch of nasty hacks to get things to work "correctly" when moving objects and the camera around, such as having to use "std::sin(-yaw)" rather than "std::sin(yaw)" when implementing equations found online. This has generally made everything confusing, to the point of trial and error in many cases.

  1. Working with D3D11 and DirectXMath (so left-handed coordinates and row-major matrices?), what exactly is the intended coordinate system? E.g. assuming the camera is at the origin and looking along the yellow vector in the image with no rotation, are the labels correct?

    [image: enter image description here]

    And then, given that, a camera described by a position (x,y,z), pitch (vertical mouse/control axis), and yaw (horizontal mouse/control axis), and assuming there is not some other way I should be doing even that...

  2. What is the correct way to build the view matrix? (Currently I'm multiplying a translation and two rotation matrices, multiplying by the projection, then by any world matrices for the object in question, and transposing the result to use as a single shader constant.)

  3. What is the equation to get the vector the camera is looking along? (Currently multiplying a (0,0,1) vector by the matrix from 2.)

  4. ... and the "up" and "right" vectors (since even if not using a look-at view function, frustum culling seems to need them). Again, currently multiplying by the matrix from 2.
  5. How do I calculate the correct pitch and yaw scalars/components from a direction vector (e.g. for a turret with separate pitch/yaw joints)?

EDIT: Code samples:

//Moving a floating object with only yaw forwards (moveX, moveY, moveZ).
//Negative yaw seems wrong?
auto c = std::cos(-yaw);
auto s = std::sin(-yaw);
pos.x += moveX * c - moveZ * s;
pos.y += moveY;
pos.z += moveX * s + moveZ * c;

//Gets the vector the camera is looking along.
//This time yaw is positive, but pitch is negative?
float c = std::cos(-pitch);
Vector3F facing(
    c * std::sin(yaw),
    std::sin(-pitch),
    c * std::cos(yaw));

//Creating the view transform matrix, everything is negative
XMMATRIX xmviewrot;
xmviewrot = XMMatrixRotationY(-yaw);
xmviewrot*= XMMatrixRotationX(-pitch);
XMMATRIX xmview;
xmview = XMMatrixTranslation(-x, -y, -z);
xmview *= xmviewrot;
XMStoreFloat4x4A(&view, xmview);

//Other vectors needed for frustum culling
XMVECTOR xmup = XMVector3Transform(XMLoadFloat4A(&UP), xmview);
XMVECTOR xmright = XMVector3Transform(XMLoadFloat4A(&RIGHT), xmview);

//Matrix for stuff that is already in world space (e.g. terrain)
XMMATRIX xmviewProj = xmview * xmproj;
//Apparently needs transposing before use on the GPU...
XMStoreFloat4x4A(&constants.transform, XMMatrixTranspose(xmviewProj));

//In the shaders
output.pos = mul(input.pos, transform);

//vertex positions for an upwards facing square with triangle strip
v0 = (x1, y, z1);
v1 = (x1, y, z2);
v2 = (x2, y, z2);
v3 = (x2, y, z1);

So it seems to me I've done something fundamentally wrong here, to need -yaw and +yaw, -pitch and +pitch in different places? Some of those functions I only got right by trial and error, and the online samples didn't use the negatives.

There is no intended coordinate system in Direct3D 11; you can use whatever you want. From Chuck Walbourn's blog entry on Getting Started with Direct3D 11:

Math: Since Direct3D 11 does not have the 'fixed-function' graphics pipeline of Direct3D 9, the choice of graphics math conventions (left-handed vs. right-handed, row-major vs. column-major matrices, etc.) is entirely up to the developer. DirectXMath can be used to create both Direct3D-style "Left-Hand Coordinate" transformations as well as OpenGL-style "Right-Handed Coordinate" transformations using a row-major matrix convention which can be used directly with row-major shaders or transposed to use column-major shaders.

Your shaders determine what coordinate system they expect. Ultimately they must provide vertices to the rasterizer stage in homogeneous clip space, which Direct3D 11 defines as:

Vertices (x,y,z,w), coming into the rasterizer stage are assumed to be in homogeneous clip-space. In this coordinate space the X axis points right, Y points up and Z points away from camera.

Accordingly, the answers to your other questions depend on what coordinate system you choose for your project. The DirectXMath library has a number of functions that can calculate the appropriate matrices for you. The older D3DX library documentation shows the math used to calculate these matrices.

Your other questions aren't very clear, but they seem to be about not understanding how matrices are used to transform vertices. You might want to look at the old Direct3D 9 documentation, which describes how and why vertices are transformed in the fixed-function pipeline and gives a good introduction to these topics.

I have recently been re-introducing myself back to DirectX (version 12) again after a long period of absence from the Microsoft development environment.

I noticed that although I had set everything up correctly, I was getting strange and unpredictable results when performing geometric transformations. Just like you, I was sending the model, view, and projection matrices to the vertex shader using a constant buffer.

To correct the problem I had to apply XMMatrixTranspose() to each of the model, view, and projection matrices, like this...

m_constantBufferData.world = XMMatrixTranspose(worldMatrix);
m_constantBufferData.view = XMMatrixTranspose(viewMatrix);
m_constantBufferData.projection = XMMatrixTranspose(projectionMatrix);

It seems that HLSL shaders default to column-major matrix storage, the opposite of the row-major convention the DirectXMath library functions use, which is why the transpose is needed.

I was surprised to learn this initially.

For completeness here is my shader code...

cbuffer SceneConstantBuffer : register(b0)
{
    float4 offset;
    matrix world;
    matrix view;
    matrix projection;
};


struct VS_INPUT
{
    float4 Pos : POSITION;
    float4 Color : COLOR;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR;
};


PS_INPUT VSMain(VS_INPUT input)
{
    PS_INPUT output;
    output.Pos = mul(input.Pos, world);
    output.Pos = mul(output.Pos, view);
    output.Pos = mul(output.Pos, projection);
    output.Color = input.Color;

    return output;
}

float4 PSMain(PS_INPUT input) : SV_TARGET
{
    return input.Color;
}
