
Converting DirectX11 ID3D11Texture2D from Shader into OpenCV IplImage

Short introduction: I have written an augmented reality application for the Oculus Rift in C++ (DirectX). One of my fragment shaders computes the undistortion for an omnidirectional camera model.

The only problem I have now is reading back the rendered undistortion ID3D11Texture2D and converting it further for 3D pose tracking/mapping of the real world using OpenCV. The funny thing is, I have already done the opposite direction: I have already created shader resource views from my distorted camera image buffers for the undistortion pass. But somehow, I am stuck going the other way.
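For reference, that already-working upload direction boils down to roughly the following sketch (camWidth, camHeight and camBuffer are illustrative placeholders, not my actual member names):

// Minimal sketch: create a dynamic texture from a tightly packed RGBA camera
// buffer and a shader resource view over it (illustrative names).
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = camWidth;
desc.Height = camHeight;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DYNAMIC;               // updated from the CPU each frame
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = camBuffer;                   // tightly packed RGBA bytes
initData.SysMemPitch = camWidth * 4;            // bytes per CPU row

ID3D11Texture2D* camTex = nullptr;
ID3D11ShaderResourceView* camSrv = nullptr;
device_->CreateTexture2D(&desc, &initData, &camTex);
device_->CreateShaderResourceView(camTex, nullptr, &camSrv);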

Here is the code where I am stuck:

void GraphicsAPI::UndistortionForLsdSlam()
{
  // ------------------------------------ [ Version 4 ] 
  // Get a pointer to the rendered shader resource (camera undistortion)
  ID3D11Resource* renderBuffer;
  renderTextureRight_->GetRenderTargetView()->GetResource(&renderBuffer);

  D3D11_TEXTURE2D_DESC texDesc;
  texDesc.ArraySize = 1;
  texDesc.BindFlags = 0;
  texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
  texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
  texDesc.Width = screenWidth_;
  texDesc.Height = screenHeight_;
  texDesc.MipLevels = 1;
  texDesc.MiscFlags = 0;
  texDesc.SampleDesc.Count = 1;
  texDesc.SampleDesc.Quality = 0;
  texDesc.Usage = D3D11_USAGE_STAGING;

  // Copy the data from the GPU into CPU-accessible staging memory
  ID3D11Texture2D* undistortedShaderTex;
  device_->CreateTexture2D(&texDesc, 0, &undistortedShaderTex);
  devicecontext_->CopyResource(undistortedShaderTex, renderBuffer);

  // SaveDDSTextureToFile(devicecontext_, undistortedShaderTex, L"SCREENSHOT.dds");

  // Map the GPU resource for CPU read access
  D3D11_MAPPED_SUBRESOURCE mappedResource;
  if (FAILED(devicecontext_->Map(undistortedShaderTex, 0, D3D11_MAP_READ, 0, &mappedResource)))
    std::cout << "Error: [CAM 2] could not Map Rendered Camera ShaderResource for Undistortion" << std::endl;

  // Copy row by row from the GPU memory layout (swizzled) to the CPU
  char* buffer = new char[(screenWidth_ * screenHeight_ * CAMERA_CHANNELS)];
  char* mappedData = static_cast<char*>(mappedResource.pData);
  for (UINT i = 0; i < screenHeight_; i++)
  {
    memcpy(buffer, mappedData, screenWidth_ * 4);
    mappedData += mappedResource.RowPitch;
    buffer += screenWidth_ * 4;
  }
  std::cout << "FINISHED LOOP .. " << std::endl;
  devicecontext_->Unmap(undistortedShaderTex, 0);

  // OpenCV IplImage conversion
  IplImage* frame = cvCreateImageHeader(cvSize(screenWidth_, screenHeight_), IPL_DEPTH_8U, CAMERA_CHANNELS);
  frame->imageData = (char*)buffer;
  frame->imageDataOrigin = frame->imageData;
  cv::Mat mymat = cv::Mat(frame, true); // Access Violation here(!)
  cv::imshow("Shader Undistortion", mymat);
}

Error message:

Unhandled exception at 0x680EF41C (msvcr120.dll) in ARift.exe: 0xC0000005: Access violation reading location 0x1FD4EFFC

The access violation happens exactly when trying to create a cv::Mat from the previously created IplImage. For testing purposes, I changed one line of code to test the OpenCV conversion with my distorted camera buffer, and it worked:

frame->imageData = (char*)buffer;

to (reading the camera frame from the distorted camera's char* memory buffer):

frame->imageData = (char*)(ariftcontrol->caminput->lsdslamCameraBuffer);

So obviously something about the char* buffer is not quite right, but after hours of testing I can't see why. First I thought the problem was that both of my cameras use the R8G8B8A8_UNORM format while the render target texture was set to DXGI_FORMAT_R32G32B32A32_FLOAT (which gave me a DirectX debug error message, since CopyResource() requires matching formats). But I already changed the render target texture to DXGI_FORMAT_R8G8B8A8_UNORM so that CopyResource() works, and I also saved the image to see what it looks like after CopyResource().
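A more robust way to rule out such mismatches is to copy the description from the source texture instead of filling it in by hand; a sketch, reusing the renderBuffer, device_ and devicecontext_ names from the code above:

// Sketch: derive the staging description from the render target itself, so
// format, dimensions and sample count always match what CopyResource() expects.
ID3D11Texture2D* srcTex = nullptr;
renderBuffer->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&srcTex);

D3D11_TEXTURE2D_DESC stagingDesc;
srcTex->GetDesc(&stagingDesc);
stagingDesc.Usage = D3D11_USAGE_STAGING;
stagingDesc.BindFlags = 0;
stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
stagingDesc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device_->CreateTexture2D(&stagingDesc, nullptr, &staging);
devicecontext_->CopyResource(staging, srcTex);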

Here it is as DDS (one can clearly see it's my undistorted 2D camera image, though I'm not sure why it looks so strange as .dds):

Undistorted ShaderResource Cameraimage as .dds

And here is the same image saved as .jpeg (looks as expected): Undistorted ShaderResource Cameraimage as .jpeg

This means all the copying from the shader resource on the GPU up to the line of code

// SaveDDSTextureToFile(devicecontext_, undistortedShaderTex, L"SCREENSHOT.dds");

seems correct. As I mentioned above, for testing I only pointed IplImage->imageData at my still-distorted camera char* buffer, and the conversion worked. So it has to be something in the memcpy() part, but I actually copied that loop from the place where I upload my distorted camera buffers into a GPU shader resource, and there it works like a charm!

Note: CAMERA_CHANNELS is 4 here.
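One detail worth noting about the loop above: buffer itself is advanced on every iteration, and that advanced pointer is later assigned to frame->imageData, so it no longer points at the start of the allocation. A version of the same per-row copy with a separate write cursor would look like this sketch:

// Sketch: per-row copy with a separate write cursor, so 'buffer' keeps
// pointing at the start of the allocation.
char* buffer = new char[screenWidth_ * screenHeight_ * 4];
char* dst = buffer;
const char* src = static_cast<const char*>(mappedResource.pData);
for (UINT i = 0; i < screenHeight_; i++)
{
  memcpy(dst, src, screenWidth_ * 4);   // one tightly packed row
  src += mappedResource.RowPitch;       // GPU rows may be padded
  dst += screenWidth_ * 4;              // CPU rows are not
}
// 'buffer' still points at the first pixel and can be handed to OpenCV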

Can anybody see what I did wrong here? I appreciate any help, thank you! :)

cheers,

M.

I finally got the conversion working. When reading the rendered shader resource from the GPU into my ID3D11Texture2D* undistortedShaderTex, it was sufficient to map the resource and do a single

    memcpy(buffer, mappedData, (screenWidth_ * screenHeight_ * 4));

instead of the loop converting from the GPU memory layout (swizzled) to the CPU memory layout. So it seems the memory layout was already linear here, presumably because the staging texture's RowPitch happened to equal screenWidth_ * 4. This is interesting to me, because when I supplied my distorted camera images as shader resources to the GPU, I definitely had to do the per-row copy with the loop above.
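Note that a single memcpy like this is only safe as long as mappedResource.RowPitch really equals screenWidth_ * 4; the driver is free to pad rows. A padding-tolerant readback, sketched here with the modern cv::Mat API instead of the deprecated IplImage (reusing the screenWidth_, screenHeight_ and mappedResource names from above), would be:

// Sketch: padding-tolerant readback into a cv::Mat (CV_8UC4 = 4 channels).
cv::Mat undistorted(screenHeight_, screenWidth_, CV_8UC4);
const char* src = static_cast<const char*>(mappedResource.pData);
for (int row = 0; row < (int)screenHeight_; row++)
{
  memcpy(undistorted.ptr(row), src, screenWidth_ * 4); // Mat rows are dense here
  src += mappedResource.RowPitch;                      // honor driver row padding
}
cv::imshow("Shader Undistortion", undistorted);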

I am glad it works now :)
