
How do I add an outline to a 2d concave polygon?

I'm successfully drawing the convex polys which make up the following white concave shape.

[image: outline example]

The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.

Evidently using...

glScalef(1.1, 1.1, 0.0);

... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.

I just have a nagging feeling I'm missing a more simple way to do this.

Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.

Also, I'm using a parallel (orthographic) projection matrix, which is why glScalef's z is set to 0.0; there is no perspective scaling.

Any ideas? Thanks!

Nope, you won't be going anywhere with glScale in this case. Possible options are:

a) construct an extruded polygon from the original one (possibly rounding sharp corners)

b) draw the polygon with GL_LINES and set glLineWidth to your desired outline width (in fact you might want to draw the outline with 2x width first)

The first approach will generate CPU load, the second one might slow down rendering significantly AFAIK.

You can displace your polygon in the 8 directions of the compass. You can have a look at this link: http://simonschreibt.de/gat/cell-shading/

It's a nice trick and might do the job.
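The displacement trick can be sketched as follows; `compassOffsets` is a hypothetical helper that generates the eight offsets at which you'd redraw the polygon in the outline colour, before drawing the white polygon on top to cover the centre:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// The eight compass offsets, scaled to the outline width w. Redraw the
// polygon in the outline colour once per offset, then draw the white
// polygon on top.
std::vector<Vec2> compassOffsets(double w) {
    const double pi = std::acos(-1.0);
    std::vector<Vec2> offsets;
    for (int i = 0; i < 8; ++i) {
        double angle = i * pi / 4.0; // eight directions in 45-degree steps
        offsets.push_back(Vec2{w * std::cos(angle), w * std::sin(angle)});
    }
    return offsets;
}
```

Note the outline width is only approximate with this trick: along the diagonals the coverage differs slightly from the axis directions.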

Unfortunately there is no simple way to get a consistent-width outline; you just have to do the math. For each edge:

  • compute the edge normal, scale it to the desired outline width, and add it to the edge's vertices to obtain an offset segment, expanding the edge
  • compute the intersection of the lines through adjacent offset segments to find the expanded vertex positions
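A minimal sketch of that math in C++; `offsetPolygon` is a hypothetical helper, assuming a simple counter-clockwise polygon with no self-intersections:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Offset a simple counter-clockwise polygon outward by `width`:
// for each vertex, take the two edges meeting there, push each edge out
// along its normal, and intersect the two offset lines.
std::vector<Vec2> offsetPolygon(const std::vector<Vec2>& poly, double width) {
    const size_t n = poly.size();
    std::vector<Vec2> out(n);
    for (size_t i = 0; i < n; ++i) {
        const Vec2& prev = poly[(i + n - 1) % n];
        const Vec2& cur  = poly[i];
        const Vec2& next = poly[(i + 1) % n];
        // Unit normal of an edge a->b (points outward for CCW winding).
        auto normal = [](const Vec2& a, const Vec2& b) {
            double dx = b.x - a.x, dy = b.y - a.y;
            double len = std::sqrt(dx * dx + dy * dy);
            return Vec2{dy / len, -dx / len};
        };
        Vec2 n0 = normal(prev, cur), n1 = normal(cur, next);
        // Each offset edge line satisfies n.x*x + n.y*y = c, with c pushed
        // out by `width`; intersect the two lines (2x2 linear system).
        double c0 = n0.x * cur.x + n0.y * cur.y + width;
        double c1 = n1.x * cur.x + n1.y * cur.y + width;
        double det = n0.x * n1.y - n0.y * n1.x;
        if (std::fabs(det) < 1e-12) {
            // Adjacent edges are collinear: just slide the vertex outward.
            out[i] = Vec2{cur.x + n0.x * width, cur.y + n0.y * width};
        } else {
            out[i] = Vec2{(c0 * n1.y - c1 * n0.y) / det,
                          (n0.x * c1 - n1.x * c0) / det};
        }
    }
    return out;
}
```

Drawing the offset polygon in orange behind the white one gives a uniform outline, though very sharp corners will produce long miter spikes unless you round or bevel them.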

A distinct answer from those offered to date, posted just for interest: if you're on GLES 2.0 and have access to shaders, you could render the source polygon to a framebuffer with a texture bound as the colour renderbuffer, then do a second pass to write to the screen (so you're using the image of the white polygon as the input texture and running a post-processing pixel shader over every pixel on the screen) with a shader that obeys the following logic for an outline of thickness q:

  • if the input is white then output a white pixel
  • if the input pixel is black then sample every pixel within a radius of q from the current pixel; if any one of them is white then output an orange pixel, otherwise output a black pixel
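The per-pixel logic above, sketched on a CPU pixel buffer rather than as an actual GLSL shader (`outlinePass` and the 0/1/2 colour encoding are illustrative assumptions):

```cpp
#include <cassert>
#include <vector>

// 0 = black (background), 1 = white (polygon), 2 = orange (outline).
// CPU sketch of the shader logic: every black pixel within radius q of a
// white pixel becomes orange; everything else passes through unchanged.
std::vector<int> outlinePass(const std::vector<int>& src, int w, int h, int q) {
    std::vector<int> dst(src);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (src[y * w + x] != 0) continue; // white pixels stay white
            bool nearWhite = false;
            for (int dy = -q; dy <= q && !nearWhite; ++dy) {
                for (int dx = -q; dx <= q && !nearWhite; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (dx * dx + dy * dy > q * q) continue; // outside radius
                    if (src[ny * w + nx] == 1) nearWhite = true;
                }
            }
            if (nearWhite) dst[y * w + x] = 2; // orange outline pixel
        }
    }
    return dst;
}
```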

In practice you'd spend an awful lot of time on texture sampling and probably turn that into the bottleneck. And those would be mostly dependent reads, which are bad for the pipeline on many GPUs, including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.

EDIT: actually, you could speed this up substantially. If your radius is q, have the hardware generate mipmaps for your framebuffer object and take the first level whose texels cover at least q by q pixels in the source image. You've then essentially got a set of bins that will be pure black if no part of the polygon was in that region and pure white if the area was entirely internal to the polygon. For each output fragment that might be on the border, you can quite possibly jump straight to a conclusion of definitely inside, or definitely outside and beyond the border, based on four samples of the mipmap.
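The mip-level choice can be sketched as follows; `mipLevelForRadius` is a hypothetical helper, assuming level-L texels each cover a 2^L by 2^L block of level-0 pixels:

```cpp
#include <cassert>
#include <cmath>

// First mip level whose texels cover at least q x q pixels of the source
// image: level L texels cover 2^L x 2^L level-0 pixels, so L = ceil(log2 q).
int mipLevelForRadius(int q) {
    if (q <= 1) return 0;
    return static_cast<int>(std::ceil(std::log2(static_cast<double>(q))));
}
```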
