
WebGL: Is there an efficient way to upload only part of an image/canvas as a texture?

I am working on a 2D layer-based application, and I'd like to do the compositing using WebGL. The layers might be shifted relative to each other, and each frame only a small (rectangular) part of each layer may change. However, the width and height of that rectangular part may vary unpredictably. I would like to use one 2D canvas and one texture per layer, each frame redraw on each canvas only the part of the layer that has been modified, then upload just that small area to the GPU to update the corresponding part of the texture, before the GPU does the compositing for me. However, I have not found an efficient way to upload just a part of an image to a part of a texture. It seems that texSubImage2D() can update a part of a texture, but it only takes full images/canvases, and it does not seem to be possible to specify a rectangular area of the image to use.

I have thought of a few ways of doing this, but each seems to have obvious overhead:

  • use getImageData() + texSubImage2D() to upload to the GPU only the part that changed (overhead in converting the canvas pixel data to ImageData)
  • re-upload the whole layer canvas each frame with texImage2D()
  • or create/resize a small 2D canvas to exactly fit each layer modification, then use texSubImage2D() to send it to update the related texture (memory allocation overhead)
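As a sketch of the first option, a helper along these lines could upload just a dirty rectangle via getImageData (the helper names and the `{x, y, w, h}` rect shape are made up for illustration; assumes a WebGL1 context and a 2D source canvas):

```javascript
// Clamp a dirty rect to the canvas bounds so getImageData never
// reads outside the source. (Hypothetical helper, for illustration.)
function clampRect(rect, width, height) {
  const x = Math.max(0, Math.min(rect.x, width));
  const y = Math.max(0, Math.min(rect.y, height));
  const w = Math.min(rect.w, width - x);
  const h = Math.min(rect.h, height - y);
  return {x, y, w, h};
}

// Upload only the changed rectangle of a 2D canvas into the
// matching region of a texture (option 1: getImageData + texSubImage2D).
function uploadDirtyRect(gl, texture, srcCtx, rect) {
  const {x, y, w, h} = clampRect(rect, srcCtx.canvas.width, srcCtx.canvas.height);
  if (w <= 0 || h <= 0) return;                   // nothing to upload
  const pixels = srcCtx.getImageData(x, y, w, h); // CPU read-back: this is the overhead
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, x, y,
                   gl.RGBA, gl.UNSIGNED_BYTE, pixels);
}
```

The getImageData call is exactly the conversion overhead mentioned in the bullet above.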

So, is there a way to specify part of an image/canvas as the texture source? Something like texImagePart2D() and texSubImagePart2D(), which would both accept four more parameters, sourceX, sourceY, sourceWidth and sourceHeight, to specify the rectangular area of the image/canvas to use?

Unfortunately no, there is no way to upload a portion of a canvas/image.

OpenGL ES 2.0, on which WebGL is based, does not provide a way to do that. OpenGL ES 3.0 does provide a way to upload a smaller rectangle of the source to a texture or a portion of a texture, so maybe the next version of WebGL will provide that feature.

For now you could use a separate canvas to help upload. First size that canvas to match the portion you want to upload:

canvasForCopying.width = widthToCopy;
canvasForCopying.height= heightToCopy;

Then copy the portion of the source canvas you want into the copying canvas:

canvasForCopying2DContext.drawImage(
    srcCanvas, srcX, srcY, widthToCopy, heightToCopy,
    0, 0, widthToCopy, heightToCopy);

Then use that to upload to the texture where you want it:

gl.texSubImage2D(gl.TEXTURE_2D, 0, destX, destY, gl.RGBA, gl.UNSIGNED_BYTE, 
                 canvasForCopying);
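Putting those three steps together, a reusable helper might look like this (the function name and argument order are illustrative, not a standard API; `scratch` is a helper canvas you create once with document.createElement('canvas') and keep around between frames):

```javascript
// Illustrative wrapper around the copy-canvas technique above:
// copies a region of srcCanvas through a reusable scratch canvas,
// then uploads it into the (destX, destY) region of the texture.
function copyRegionToTexture(gl, texture, scratch, srcCanvas,
                             srcX, srcY, width, height, destX, destY) {
  // 1. Size the scratch canvas to match the region being copied.
  scratch.width = width;
  scratch.height = height;
  // 2. Copy just that region of the source canvas into it.
  scratch.getContext('2d').drawImage(
      srcCanvas, srcX, srcY, width, height, 0, 0, width, height);
  // 3. Upload the scratch canvas into the destination region of the texture.
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, destX, destY,
                   gl.RGBA, gl.UNSIGNED_BYTE, scratch);
}
```

Reusing one scratch canvas avoids reallocating a canvas per upload, though resizing it each call still has some cost (the "memory reservation overhead" the question mentions).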

getImageData will likely be slower because the browser has to call readPixels to get the image data, which stalls the graphics pipeline. The browser does not have to do that for drawImage.

As for why texImage2D can sometimes be faster than texSubImage2D, it depends on the driver/GPU, but apparently texImage2D can sometimes be implemented using DMA whereas texSubImage2D can not. texImage2D can also be pipelined by making a new texture and lazily discarding the old one, whereas texSubImage2D can't.

Personally I wouldn't worry about it, but if you want to check, time uploading tens of thousands of textures using both texImage2D and texSubImage2D (don't time just one, as graphics are pipelined). You'll probably find texSubImage2D is faster if your texture is large and the portion you want to update is smaller than 25% of the texture. At least that's what I found last time I checked. Most current drivers are at least optimized in that if you call texSubImage2D and happen to be replacing the entire contents, they'll call the texImage2D code internally.
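A rough sketch of such a timing harness might look like this (purely illustrative; `uploadFn` would wrap the actual texImage2D or texSubImage2D call, and real numbers vary per driver/GPU):

```javascript
// Time N uploads with a given upload function. gl.finish() forces
// the pipeline to drain before the clock stops, so queued GPU work
// is included in the measurement (never do this in production code).
function timeUploads(gl, uploadFn, iterations) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    uploadFn(gl, i);  // e.g. one texImage2D or texSubImage2D call
  }
  gl.finish();        // wait for queued GPU work to complete
  return performance.now() - start;
}
```

Run it once with a texImage2D-based uploadFn and once with a texSubImage2D-based one, and compare the totals rather than single calls.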

update

There are a couple of things you can do:

  1. For images, you can use fetch and ImageBitmap to load a portion of an image into an ImageBitmap, which you can then upload.

    In the example below we call fetch, then get a Blob and use that blob to make an ImageBitmap containing only a portion of the image. The result can be passed to texImage2D, but in the interest of brevity the sample just uses it in a 2D canvas.

    fetch('https://i.imgur.com/TSiyiJv.jpg', {mode: 'cors'})
      .then((response) => {
        if (!response.ok) {
          throw response;
        }
        return response.blob();
      })
      .then((blob) => {
        const x = 451;
        const y = 453;
        const width = 147;
        const height = 156;
        return createImageBitmap(blob, x, y, width, height);
      })
      .then((bitmap) => {
        useit(bitmap);
      })
      .catch(function(e) {
        console.error(e);
      });

    // -- just to show we got a portion of the image
    function useit(bitmap) {
      const ctx = document.createElement("canvas").getContext("2d");
      document.body.appendChild(ctx.canvas);
      ctx.drawImage(bitmap, 0, 0);
    }

  2. In WebGL2 there are the gl.pixelStorei settings

    UNPACK_ROW_LENGTH   // how many pixels wide a row of the source is
    UNPACK_SKIP_ROWS    // how many rows to skip from the start of the source
    UNPACK_SKIP_PIXELS  // how many pixels to skip from the left of the source

    so using those 3 settings you can tell WebGL2 that the source is wider but the portion you want from it is smaller. You pass the smaller width to texImage2D, and the 3 settings above tell WebGL how to extract the smaller portion and where to start.
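As a sketch of how those settings fit together (WebGL2 only; the function name is made up, and the coordinates below are just example values):

```javascript
// WebGL2 sketch: upload a width x height region starting at (x, y)
// of a larger source image, using the UNPACK_* pixel-store settings.
function uploadSubRect(gl, image, x, y, width, height) {
  gl.pixelStorei(gl.UNPACK_ROW_LENGTH, image.width); // full row width of the source
  gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, x);          // skip x pixels from the left
  gl.pixelStorei(gl.UNPACK_SKIP_ROWS, y);            // skip y rows from the top
  // WebGL2 overload of texImage2D that takes explicit width/height
  // plus a TexImageSource.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, image);
  // Reset so later uploads are not affected.
  gl.pixelStorei(gl.UNPACK_ROW_LENGTH, 0);
  gl.pixelStorei(gl.UNPACK_SKIP_PIXELS, 0);
  gl.pixelStorei(gl.UNPACK_SKIP_ROWS, 0);
}
```

Resetting the UNPACK_* parameters afterwards matters: they are context-wide state, so a forgotten non-zero value would silently skew every later upload.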
