
DirectX 11 - Compute Shader, copy data from the GPU to the CPU

I've just started using DirectCompute in an attempt to move a fluid simulation I have been working on onto the GPU. I found a very similar (if not identical) question here; however, it seems the resolution to their problem is not the same as mine, as I definitely do have my CopyResource the right way round! As in the pasted question, I only get a buffer filled with 0's when copying back from the GPU. I really can't see the error, as I don't understand how I could be reaching out-of-bounds limits. I'm going to apologise for the mass of code about to be pasted, but I want to be sure I haven't got any of the setup wrong.

Output Buffer, UAV and System Buffer setup

  // Output buffer the compute shader writes into (GPU-only, default usage)
  outputDesc.Usage = D3D11_USAGE_DEFAULT;
  outputDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
  outputDesc.ByteWidth = sizeof(BoundaryConditions) * numElements;
  outputDesc.CPUAccessFlags = 0;
  outputDesc.StructureByteStride = sizeof(BoundaryConditions);
  outputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
  result = _device->CreateBuffer(&outputDesc, 0, &m_outputBuffer);

  // Staging copy of the same buffer so the CPU can read the results back
  outputDesc.Usage = D3D11_USAGE_STAGING;
  outputDesc.BindFlags = 0;
  outputDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
  result = _device->CreateBuffer(&outputDesc, 0, &m_outputresult);

  // UAV over the default-usage buffer
  D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
  uavDesc.Format = DXGI_FORMAT_UNKNOWN;
  uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
  uavDesc.Buffer.FirstElement = 0;
  uavDesc.Buffer.Flags = 0;
  uavDesc.Buffer.NumElements = numElements;
  result = _device->CreateUnorderedAccessView(m_outputBuffer, &uavDesc, &m_BoundaryConditionsUAV);
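
One thing worth adding here (a minimal sketch, not part of the original post) is zero-initialising the descriptor and checking the HRESULT of each creation call, so a buffer or UAV that silently fails to create shows up at the call site instead of as a buffer full of zeros later:

  // Sketch only: '= {}' guards against stale fields when the desc struct is reused,
  // and FAILED() surfaces a creation failure immediately.
  D3D11_BUFFER_DESC outputDesc = {};
  // ... fill in the fields exactly as above ...
  HRESULT hr = _device->CreateBuffer(&outputDesc, nullptr, &m_outputBuffer);
  if (FAILED(hr))
  {
      OutputDebugStringA("CreateBuffer for the UAV buffer failed\n");
  }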

Running the Shader in my frame loop

  HRESULT result;
  D3D11_MAPPED_SUBRESOURCE mappedData;

  // Bind and run the compute shader
  _deviceContext->CSSetShader(m_BoundaryConditionsCS, nullptr, 0);
  _deviceContext->CSSetUnorderedAccessViews(0, 1, &m_BoundaryConditionsUAV, nullptr);
  _deviceContext->Dispatch(1, 1, 1);

  // Unbind output from compute shader
  ID3D11UnorderedAccessView* nullUAV[] = { nullptr };
  _deviceContext->CSSetUnorderedAccessViews(0, 1, nullUAV, nullptr);

  // Disable Compute Shader
  _deviceContext->CSSetShader(nullptr, nullptr, 0);

  // Copy the GPU buffer into the staging buffer and map it for CPU reads
  _deviceContext->CopyResource(m_outputresult, m_outputBuffer);
  result = _deviceContext->Map(m_outputresult, 0, D3D11_MAP_READ, 0, &mappedData);

  BoundaryConditions* newbc = reinterpret_cast<BoundaryConditions*>(mappedData.pData);
  for (int i = 0; i < 4; i++)
  {
      Debug::Instance()->Log(newbc[i].xx);
  }

  _deviceContext->Unmap(m_outputresult, 0);
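
Independently of the actual bug, the readback is worth guarding (a small sketch, not from the original code): if Map fails or pData comes back null, the loop above reads garbage or crashes instead of pointing at the real problem.

  // Sketch only: bail out if the staging buffer could not be mapped.
  result = _deviceContext->Map(m_outputresult, 0, D3D11_MAP_READ, 0, &mappedData);
  if (FAILED(result) || mappedData.pData == nullptr)
  {
      OutputDebugStringA("Map on m_outputresult failed\n");
      return;
  }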

HLSL

  struct BoundaryConditions
  {
      float3 x;
      float3 y;
  };

  RWStructuredBuffer<BoundaryConditions> _boundaryConditions;

  [numthreads(4, 1, 1)]
  void ComputeBoundaryConditions(int3 id : SV_DispatchThreadID)
  {
      _boundaryConditions[id.x].x = float3(id.x, id.y, id.z);
  }
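
For a structured buffer like this the CPU-side struct has to mirror the HLSL layout: two float3 fields give a 24-byte stride, which is exactly what sizeof(BoundaryConditions) feeds into ByteWidth and StructureByteStride in the setup code. A hedged sketch of what the matching C++ struct could look like (the xx field name is taken from the readback loop above; yy is an assumption):

  #include <DirectXMath.h>  // DirectX::XMFLOAT3

  // Assumed CPU-side mirror of the HLSL struct; not taken from the original post.
  struct BoundaryConditions
  {
      DirectX::XMFLOAT3 xx;  // maps to float3 x in HLSL
      DirectX::XMFLOAT3 yy;  // maps to float3 y in HLSL
  };
  static_assert(sizeof(BoundaryConditions) == 24, "CPU stride must match the HLSL struct");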

I dispatch the compute shader after I begin a frame and before I end the frame. I have played around with moving the shader's dispatch call outside of the end scene and before the present, etc., but nothing seems to affect the result. Can't seem to figure this one out!

Holy smokes, I fixed the error! I was creating the compute shader into a different ID3D11ComputeShader pointer! D: Works like a charm! Phew, sorry and thanks, Adam!
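
For anyone hitting the same symptom: the zeros came from binding a compute shader object that was never the one actually created. A minimal sketch of the shader-creation path with the fix applied, creating into the same member that CSSetShader later binds (the file name and member names here are assumptions, not taken from the post):

  #include <d3dcompiler.h>  // D3DCompileFromFile; link against d3dcompiler.lib

  ID3DBlob* csBlob = nullptr;
  ID3DBlob* errors = nullptr;
  HRESULT hr = D3DCompileFromFile(L"BoundaryConditions.hlsl", nullptr, nullptr,
                                  "ComputeBoundaryConditions", "cs_5_0",
                                  0, 0, &csBlob, &errors);
  if (SUCCEEDED(hr))
  {
      // Create into the *same* member that CSSetShader binds later.
      hr = _device->CreateComputeShader(csBlob->GetBufferPointer(),
                                        csBlob->GetBufferSize(),
                                        nullptr,
                                        &m_BoundaryConditionsCS);
  }
  if (FAILED(hr))
  {
      OutputDebugStringA("Compute shader compile/create failed\n");
  }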
