
DirectX Z buffer not working correctly

My problem centers on the depth test in DirectX 9. I'm rendering one object at a static point in the scene and another at a point that changes based on input from DirectInput. One of the objects is drawn almost 500 units farther away. On the first frame the two objects render correctly, but as soon as the position of the second one changes (which also moves the camera), the farther object is drawn over the closer one, despite the depth test being enabled and the near and far plane values being correct. This problem has had me poring over forums (this one included) with no luck. If anyone can help, it would be much appreciated.

EDIT:

Maybe I was a little unclear. The objects are drawn correctly before any change of position is made; the camera direction can be changed at will with no ill effects. Once the position changes, the objects are rendered incorrectly.

Updates:

@paul-jan: I would post code, but it's scattered across several classes... The depth buffer is just a D3DFMT_D16 auto depth-stencil buffer declared in D3DPRESENT_PARAMETERS. According to the MSDN docs, it is automatically enabled, depth writes are enabled, and the comparison function is D3DCMP_LESSEQUAL. Also, the models are loaded from .X files, and they display fine in the DirectX Viewer.
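For reference, the setup described above corresponds to something like the following sketch (the variable names are assumptions, not the poster's actual code). The render states make the defaults explicit, which is a cheap way to rule out the depth test being silently disabled somewhere:

```cpp
// Sketch of an automatic D3DFMT_D16 depth-stencil setup (hypothetical names).
D3DPRESENT_PARAMETERS d3dpp = {};
d3dpp.Windowed               = TRUE;
d3dpp.SwapEffect             = D3DSWAPEFFECT_DISCARD;
d3dpp.EnableAutoDepthStencil = TRUE;        // let D3D create/manage the depth buffer
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;  // 16-bit depth, as in the question

// After device creation, state the depth-test defaults explicitly
// instead of relying on them:
device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
```

Note that with a 16-bit buffer and objects roughly 500 units apart, depth precision is also worth double-checking: most of a 16-bit buffer's precision sits near the near plane, so a very small near-plane value can make distant objects z-fight or sort incorrectly.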

@vines: Interesting hypothesis. The transformations are different, but only in the world matrix. My gut instinct is that the problem is in the view matrix. I'm working off an implementation described in the book Introduction to 3D Game Programming with DirectX 9.0c by Frank D. Luna that I've modified to suit my purposes, namely three-axis rotation. However, I may have messed up my implementation of it.

Here's some code...

void BuildViewMatrix()
{
    // Re-orthonormalize the camera basis (f = forward, u = up, r = right).
    D3DXVec3Normalize(&f, &f);
    D3DXVec3Cross(&u, &f, &r);   // u = f x r
    D3DXVec3Normalize(&u, &u);
    D3DXVec3Cross(&r, &u, &f);   // r = u x f
    D3DXVec3Normalize(&r, &r);

    // Build the world matrix: basis vectors in rows 0-2,
    // position in the fourth column.
    w(0,0) = r.x;  w(0,1) = r.y;  w(0,2) = r.z;  w(0,3) = p.x;
    w(1,0) = u.x;  w(1,1) = u.y;  w(1,2) = u.z;  w(1,3) = p.y;
    w(2,0) = f.x;  w(2,1) = f.y;  w(2,2) = f.z;  w(2,3) = p.z;
    w(3,0) = 0.0f; w(3,1) = 0.0f; w(3,2) = 0.0f; w(3,3) = 1.0f;

    // The view matrix is the inverse of the world matrix
    // (taken before the local-space offset below is applied).
    D3DXMatrixInverse(&view, 0, &w);

    // Apply the local-space offset transformation l before the world transform.
    w = l * w;
}

r = right vector, u = up vector, f = forward vector, p = position, w = world matrix, l = local-space offset transformation (my own idea for translating things slightly before a world transform).

The whole idea is based on a camera for a third-person star-fighter combat game. The camera and your ship are "joined" as one object (hence the local-space transform), and the view matrix is the inverse of the ship's world matrix. I'm not sure this is the best way to implement it, though.

Any ideas?

A guess:

Seems like the matrices for the two objects differ: you render the first object with one transformation setup, then change it and render the second object. Visually, the second object gets projected where it is expected to be, but the z-coordinate scale is now distorted relative to what was used for the first object. Thus you "fool" the Z-buffer: you paint the right pixels, but project them nearer than they are supposed to be.
