
Real-time 2D rendering to system memory

I'm writing a video filter that processes image/frame data in system (not video) memory and renders 2D graphics. It should be cross-platform (at least Windows, Linux and macOS) and very fast. I'd prefer no GPU requirement, so my focus so far has been on software rendering, and I've looked at different renderers such as Cairo, AGG and many other projects.

Regrettably, Cairo can become very slow on complex paths and gradients and produces ugly geometry faults with tiny path segments. AGG was slow too because of missing optimizations, which require a lot of work from the user. Other projects only render to windows, or performance wasn't a priority for them. Blend2D made me curious, but it will take time to mature.

Now I'm asking myself: should I just render to an OpenGL framebuffer and handle the 2D-to-3D work with a geometry library, take on the challenge of developing a software renderer from scratch (accelerated by SIMD, a threaded pipeline, etc.), or have I missed a library that fits my needs?

Is pushing everything graphics-related to the GPU always worth it, given how cheap data transfer to and from video memory is these days, even with 1080p or larger images?

A typical desktop CPU has 4 processing cores running at 2.5 GHz. A modern desktop GPU (circa 2010) has 48 to 480 shaders running at 1.4 GHz. So in terms of raw processing power, the GPU can process graphics 7 to 70 times faster than the CPU.
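The 7-to-70 range follows directly from the core counts and clock rates above. A rough back-of-the-envelope check (treating throughput as simply cores × clock, and ignoring SIMD width and memory bandwidth):

```python
# Rough throughput ratio from the figures above (ops/s ~ cores * clock).
cpu_ops = 4 * 2.5e9       # 4 CPU cores at 2.5 GHz
gpu_low = 48 * 1.4e9      # low-end circa-2010 GPU: 48 shaders at 1.4 GHz
gpu_high = 480 * 1.4e9    # high-end: 480 shaders at 1.4 GHz

print(round(gpu_low / cpu_ops))   # 7
print(round(gpu_high / cpu_ops))  # 67, i.e. roughly 70x
```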

As for transfer bandwidth: a 1080p image is 1920x1080 pixels, the frame rate is 30 frames per second, and a pixel is 4 bytes (32 bits). So the total bus bandwidth required for real-time 1080p processing is

1920 x 1080 x 30 x 4 = 248,832,000 bytes/s ≈ 249 MB/s

The bandwidth of a PCI Express 2.0 x16 slot is 8000 MB/s, which is to say that the CPU-to-GPU transfer rate is not an issue.
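The arithmetic above can be checked in a few lines:

```python
# Bandwidth needed to stream uncompressed 1080p frames in real time.
width, height = 1920, 1080
fps = 30
bytes_per_pixel = 4  # 32-bit RGBA

bandwidth = width * height * fps * bytes_per_pixel  # bytes per second
print(bandwidth / 1e6)  # 248.832 MB/s

pcie2_x16 = 8000  # PCIe 2.0 x16, MB/s
print(pcie2_x16 / (bandwidth / 1e6))  # ~32x headroom over what 1080p needs
```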

So, to answer your question: yes, pushing everything graphics-related to the GPU is always worth it, even with 1080p or larger images.

An alternative to OpenCV could be SDL. There is at least a tutorial and documentation that specifically discuss streaming video data to the hardware. Of course, you'll have to rewrite your filter in GLSL to get any decent acceleration.

That said, it sounds like your problem may benefit more from a GPU-compute solution such as OpenCL or CUDA. In that case there's no rendering involved; instead you send your data to a GPU kernel and get it back once processed. You can also hand the result to OpenGL/DirectX for rendering (the video memory can be reused as a texture quite easily, without performance loss). If you're not keen on moving beyond the OpenGL API, you can also use a compute shader. It works like a traditional shader, except that it computes in a pass, with a few additional constraints and restrictions.
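To illustrate the GPU-compute model: a data-parallel filter is expressed as one function applied independently per pixel, and the GPU runs that function across thousands of threads at once. A CPU-side sketch of that structure in plain Python, with a hypothetical `brighten` kernel as the example filter (an actual OpenCL/CUDA version would replace the loop with a parallel dispatch):

```python
def brighten(pixel, amount=30):
    # Per-pixel kernel: the unit of work a GPU would run in parallel.
    r, g, b, a = pixel
    clamp = lambda v: min(255, max(0, v))
    return (clamp(r + amount), clamp(g + amount), clamp(b + amount), a)

def run_kernel(kernel, frame):
    # On a GPU this loop disappears: each pixel maps to one thread.
    return [kernel(p) for p in frame]

frame = [(100, 150, 250, 255), (0, 0, 0, 255)]  # two RGBA pixels
print(run_kernel(brighten, frame))
# [(130, 180, 255, 255), (30, 30, 30, 255)]
```

Because every pixel is processed independently, the same structure maps directly onto OpenCL work-items, CUDA threads, or compute-shader invocations.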

Have you tried OpenCV?

It provides both software and hardware renderers, along with a huge library of highly optimized functions for real-time image processing and rendering.
