I'm trying to implement real motion blur in OpenGL, but without the accumulation buffer (it doesn't work on my graphics card). Here is my idea for the implementation:
Is there any simpler/faster/more optimized way than this? Or is this the best solution?
What NicolBolas says in his comments is correct: to get real motion blur you must apply a vector blur driven by each fragment's velocity. Calculate the screen-space velocity of each vertex, pass it along to the fragment shader, and then blur each fragment in the direction and over the distance of its velocity.
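As a rough sketch, the per-fragment vector blur described above might look like the following post-process fragment shader. The uniform and texture names (`uScene`, `uVelocity`, `uNumSamples`) are illustrative, and it assumes the velocities have already been written into a screen-sized velocity buffer:

```glsl
#version 330 core
// Post-process vector blur: sample the scene backwards along each
// fragment's screen-space velocity and average the samples.
uniform sampler2D uScene;     // the rendered frame
uniform sampler2D uVelocity;  // per-fragment screen-space velocity in RG
uniform int uNumSamples;      // e.g. 8

in vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec2 velocity = texture(uVelocity, vTexCoord).rg;
    vec4 color = texture(uScene, vTexCoord);
    for (int i = 1; i < uNumSamples; ++i) {
        vec2 offset = velocity * (float(i) / float(uNumSamples));
        color += texture(uScene, vTexCoord - offset);
    }
    fragColor = color / float(uNumSamples);
}
```

The number of samples trades blur quality against texture-fetch cost, which matters for the performance discussion below.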
Since this blurs fragments across one another, you end up with a transparency-ordering problem. Hence you should apply it as a post-processing effect, ideally on depth-peeled layers. You can avoid the depth-sorting complexity by keeping a backlog of previously rendered frames to blend into, which is essentially the framebuffer method you suggested, with vector blur added.
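The frame-backlog blend can be done with a simple exponential accumulation pass; a minimal sketch, assuming a ping-ponged pair of history framebuffers (the names `uCurrentFrame`, `uHistory`, `uBlendFactor` are illustrative):

```glsl
#version 330 core
// Blend the newest frame into a running history texture.
// Render this into the *other* history FBO of a ping-pong pair,
// then display the result.
uniform sampler2D uCurrentFrame;  // this frame's render
uniform sampler2D uHistory;       // accumulated previous frames
uniform float uBlendFactor;       // weight of the newest frame, e.g. 0.2

in vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec4 current = texture(uCurrentFrame, vTexCoord);
    vec4 history = texture(uHistory, vTexCoord);
    // Exponential decay: older frames fade out gradually.
    fragColor = mix(history, current, uBlendFactor);
}
```

A smaller `uBlendFactor` gives a longer trail; at 1.0 there is no blur at all.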
There are two general approaches to doing this kind of thing: (1) render all subframes to textures, then combine them in a single pass that samples every subframe texture; or (2) render each subframe in its own pass and blend it into an accumulation target.
It's not trivial to tell which of these methods would work better. Method (2) has the disadvantage that you're making many passes, and therefore a lot of per-pass overhead. Method (1) will be bottlenecked by texture reads. Although method (1) ultimately reads the same amount of data as method (2), it can take advantage of multiple cache lines for the texture memory fetches. So the two most important factors determining performance here are:

a) how many "subframes" you have, and
b) how big your screen is, and thus how big the textures to be read and written will be.
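Method (1), for example, could be sketched as a single combine pass over a texture array of subframes (the names `uSubframes` and `uSubframeCount`, and the use of a `sampler2DArray`, are assumptions, not part of the original suggestion):

```glsl
#version 330 core
// Method (1): average all subframes in one full-screen pass.
// Assumes the subframes were rendered into a 2D texture array.
uniform sampler2DArray uSubframes;
uniform int uSubframeCount;

in vec2 vTexCoord;
out vec4 fragColor;

void main() {
    vec4 sum = vec4(0.0);
    for (int i = 0; i < uSubframeCount; ++i)
        sum += texture(uSubframes, vec3(vTexCoord, float(i)));
    fragColor = sum / float(uSubframeCount);
}
```

The loop makes the texture-read bottleneck explicit: the pass performs (subframes × screen pixels) fetches, which is why factors (a) and (b) dominate.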