
DirectX Partial Screen Capture

I am trying to create a program that will capture a full-screen DirectX application, look for a specific set of pixels on the screen, and, if it finds them, draw an image on the screen.

I have been able to set up the application to capture the screen with the DirectX libraries, using the code from the answer to this question: Capture screen using DirectX.

In this example the code saves to the hard drive using the WIC libraries. I would rather manipulate the pixels instead of saving them.

After I have captured the screen and have an LPBYTE of the entire screen's pixels, I am unsure how to crop it to the region I want and then how to manipulate the pixel array. Is it just a multi-dimensional byte array?

The way I think I should do it is:

  1. Capture the screen to an IWIC bitmap (done).
  2. Convert the IWIC bitmap to an ID2D1 bitmap using ID2D1RenderTarget::CreateBitmapFromWicBitmap.
  3. Create a new ID2D1Bitmap to store the partial image.
  4. Copy a region of the ID2D1 bitmap to the new bitmap using ID2D1Bitmap::CopyFromBitmap (see the sketch after this list).
  5. Render back onto the screen using ID2D1.
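
For steps 2-4, something along these lines might work. This is a minimal sketch, assuming you already have an initialized ID2D1RenderTarget (pRenderTarget) and the captured frame as an IWICBitmapSource (pWicBitmap) in a D2D-compatible format such as 32bppPBGRA; CopyRegionToD2DBitmap and all parameter names are placeholders, not code from the linked answer:

#include <d2d1.h>     // link with d2d1.lib
#include <wincodec.h>

HRESULT CopyRegionToD2DBitmap(ID2D1RenderTarget *pRenderTarget,
                              IWICBitmapSource *pWicBitmap,
                              const D2D1_RECT_U &region,
                              ID2D1Bitmap **ppPartial)
{
  ID2D1Bitmap *pFull = nullptr;

  // 2. Wrap the WIC bitmap in a Direct2D bitmap (format taken from the source).
  HRESULT hr = pRenderTarget->CreateBitmapFromWicBitmap(pWicBitmap, nullptr, &pFull);
  if (FAILED(hr)) return hr;

  // 3. Create an empty bitmap the size of the region, same pixel format.
  D2D1_SIZE_U size = D2D1::SizeU(region.right - region.left, region.bottom - region.top);
  D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
  hr = pRenderTarget->CreateBitmap(size, nullptr, 0, props, ppPartial);

  // 4. Copy the region of interest from the full bitmap into the new one.
  if (SUCCEEDED(hr))
  {
    D2D1_POINT_2U dest = D2D1::Point2U(0, 0);
    hr = (*ppPartial)->CopyFromBitmap(&dest, pFull, &region);
  }

  pFull->Release();
  return hr;
}

Step 5 would then be a normal BeginDraw / DrawBitmap / EndDraw sequence on the render target.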

Any help with any of this would be much appreciated.

Here is a modified version of the original code that only captures a portion of the screen into a buffer, and also returns the stride. It then iterates over all the pixels and dumps their colors, as a sample usage of the returned buffer.

In this sample, the buffer is allocated by the function, so you must free it once you have finished with it:

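// Note: this sample needs <d3d9.h> and links with d3d9.lib.
// SavePixelsToFile32bppPBGRA comes from the original answer linked in the question.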
// sample usage
int main()
{
  LONG left = 10;
  LONG top = 10;
  LONG width = 100;
  LONG height = 100;
  LPBYTE buffer;
  UINT stride;
  RECT rc = { left, top, left + width, top + height };
  Direct3D9TakeScreenshot(D3DADAPTER_DEFAULT, &buffer, &stride, &rc);

  // In 32bppPBGRA format, each pixel is represented by 4 bytes
  // with one byte each for blue, green, red, and the alpha channel, in that order.
  // But don't forget this is all modulo endianness ...
  // So, on Intel architecture, if we read a pixel from memory
  // as a DWORD, it's reversed (ARGB). The macros below handle that.

  // browse every pixel by line
  for (int h = 0; h < height; h++)
  {
    LPDWORD pixels = (LPDWORD)(buffer + h * stride);
    for (int w = 0; w < width; w++)
    {
      DWORD pixel = pixels[w];
      wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));
    }
  }

  // get pixel at 50, 50 in the buffer, as #ARGB
  DWORD pixel = GetBGRAPixel(buffer, stride, 50, 50);
  wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));

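  // optionally dump the captured region to a PNG file on disk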
  SavePixelsToFile32bppPBGRA(width, height, stride, buffer, L"test.png", GUID_ContainerFormatPng);
  LocalFree(buffer);
  return 0;
}

#define GetBGRAPixelBlue(p)         (LOBYTE(p))
#define GetBGRAPixelGreen(p)        (HIBYTE(p))
#define GetBGRAPixelRed(p)          (LOBYTE(HIWORD(p)))
#define GetBGRAPixelAlpha(p)        (HIBYTE(HIWORD(p)))
#define GetBGRAPixel(b,s,x,y)       (((LPDWORD)(((LPBYTE)b) + y * s))[x])
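
Since the original goal was to look for a specific set of pixels, here is a minimal sketch showing how the returned buffer and the macros above could be used to search the captured region for a given color. FindPixel and targetPixel are hypothetical names for illustration, not part of the answer's code:

// Scan the captured buffer for the first pixel equal to 'targetPixel'
// (a DWORD in the same A,R,G,B byte order read by the macros above).
// 'stride' is the value returned by Direct3D9TakeScreenshot.
bool FindPixel(LPBYTE buffer, UINT stride, LONG width, LONG height,
               DWORD targetPixel, LONG *x, LONG *y)
{
  for (LONG h = 0; h < height; h++)
  {
    LPDWORD pixels = (LPDWORD)(buffer + h * stride);
    for (LONG w = 0; w < width; w++)
    {
      if (pixels[w] == targetPixel)
      {
        *x = w;
        *y = h;
        return true;
      }
    }
  }
  return false;
}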


HRESULT Direct3D9TakeScreenshot(UINT adapter, LPBYTE *pBuffer, UINT *pStride, const RECT *pInputRc = nullptr)
{
  if (!pBuffer || !pStride) return E_INVALIDARG;

  HRESULT hr = S_OK;
  IDirect3D9 *d3d = nullptr;
  IDirect3DDevice9 *device = nullptr;
  IDirect3DSurface9 *surface = nullptr;
  D3DPRESENT_PARAMETERS parameters = { 0 };
  D3DDISPLAYMODE mode;
  D3DLOCKED_RECT rc;

  *pBuffer = NULL;
  *pStride = 0;

  // init D3D and get screen size
  d3d = Direct3DCreate9(D3D_SDK_VERSION);
  HRCHECK(d3d->GetAdapterDisplayMode(adapter, &mode));

  LONG width = pInputRc ? (pInputRc->right - pInputRc->left) : mode.Width;
  LONG height = pInputRc ? (pInputRc->bottom - pInputRc->top) : mode.Height;

  parameters.Windowed = TRUE;
  parameters.BackBufferCount = 1;
  parameters.BackBufferHeight = height;
  parameters.BackBufferWidth = width;
  parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
  parameters.hDeviceWindow = NULL;

  // create device & capture surface (note it needs desktop size, not our capture size)
  HRCHECK(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device));
  HRCHECK(device->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &surface, nullptr));

  // get pitch/stride to compute the required buffer size
  HRCHECK(surface->LockRect(&rc, pInputRc, 0));
  *pStride = rc.Pitch;
  HRCHECK(surface->UnlockRect());

  // allocate buffer
  *pBuffer = (LPBYTE)LocalAlloc(0, *pStride * height);
  if (!*pBuffer)
  {
    hr = E_OUTOFMEMORY;
    goto cleanup;
  }

  // get the data
  HRCHECK(device->GetFrontBufferData(0, surface));

  // copy it into our buffer
  HRCHECK(surface->LockRect(&rc, pInputRc, 0));
  CopyMemory(*pBuffer, rc.pBits, rc.Pitch * height);
  HRCHECK(surface->UnlockRect());

cleanup:
  if (FAILED(hr))
  {
    if (*pBuffer)
    {
      LocalFree(*pBuffer);
      *pBuffer = NULL;
    }
    *pStride = 0;
  }
  RELEASE(surface);
  RELEASE(device);
  RELEASE(d3d);
  return hr;
}
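
Note that HRCHECK and RELEASE are helper macros defined in the original answer referenced by the question; they are not shown here. A minimal sketch of equivalent definitions (an assumption, not the author's exact macros) would be:

// Store the HRESULT and jump to the cleanup label on failure
// (assumes a local 'hr' variable and a 'cleanup:' label, as above).
#define HRCHECK(expr)  { hr = (expr); if (FAILED(hr)) goto cleanup; }

// Release a COM interface pointer if set, then null it out.
#define RELEASE(p)     { if ((p) != nullptr) { (p)->Release(); (p) = nullptr; } }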
