
2D OpenGL ES architecture

The app I'm developing displays a large-ish (about 1M vertices), static 2D image. Due to memory limitations I have been refilling the VBOs with new data every time the user scrolls or zooms, giving the impression that the entire image "exists", even though it doesn't. There are two problems with this approach: 1) although the responsiveness is "good enough", it would be better if I could make scrolling and zooming faster and less choppy; 2) I've been sticking to the 64k-vertex limit of a single draw call, which puts a hard cap on how much of the image can be shown at a time. It would be nice to be able to see more of the image, or even all of it at once. Although the performance at the moment is, again, good enough because we are at the prototype stage and have set up the data to work within these limitations, to get to product level we will have to remove them.

Recently I discovered that by using the "android:largeHeap" option I can get 256 MB of heap space on a Motorola Xoom, which means I can store the entire image in VBOs. In my ideal world I would simply hand the OpenGL engine the VBOs and either tell it that the camera has moved, or use glScale/glTranslate to zoom/scroll.
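For reference, that flag goes on the `<application>` element of AndroidManifest.xml (the other attributes shown are just placeholders):

```xml
<application
    android:largeHeap="true"
    android:label="@string/app_name">
    <!-- activities etc. -->
</application>
```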

My questions are these: am I on the right track? Should I always "draw" all of the chunks and let OpenGL figure out which will actually be seen, or figure out which chunks are visible myself? Is there any difference between using something like gluLookAt and glScale/glTranslate?

I don't care about aspect ratio distortion (the image is mathematically generated, not a photo), it is much wider than it is high, and in the future the number of vertices could get much, much larger (e.g. 60M). Thanks for your time.

Never let OpenGL figure out by itself what's on screen. It won't. All vertices will be transformed, and only then will the ones that aren't on screen be clipped; you know your scene far better than OpenGL does.

Using one huge 256 MB VBO will make you render the whole scene each time, and transform ALL vertices each time, which is bad for performance.

Make a number of small VBOs (e.g. only a 3×3 grid of them should be visible at any moment), and draw only those that are visible. Optionally, pre-fill upcoming VBOs based on movement extrapolation.
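The "draw only visible chunks" test is just a rectangle-overlap computation on the CPU. A minimal sketch (the class and parameter names are illustrative, not from any particular API):

```java
// Given a camera rectangle in world units, find which grid chunks
// (each backed by its own VBO) overlap the view and must be drawn.
public class ChunkCuller {
    private final float chunkSize; // world-space width/height of one chunk
    private final int cols, rows;  // grid dimensions

    public ChunkCuller(float chunkSize, int cols, int rows) {
        this.chunkSize = chunkSize;
        this.cols = cols;
        this.rows = rows;
    }

    /** Returns {minCol, minRow, maxCol, maxRow} of chunks overlapping the view. */
    public int[] visibleRange(float left, float bottom, float right, float top) {
        int minCol = Math.max(0, (int) Math.floor(left / chunkSize));
        int minRow = Math.max(0, (int) Math.floor(bottom / chunkSize));
        int maxCol = Math.min(cols - 1, (int) Math.floor(right / chunkSize));
        int maxRow = Math.min(rows - 1, (int) Math.floor(top / chunkSize));
        return new int[] { minCol, minRow, maxCol, maxRow };
    }
}
```

In the render loop you would then iterate that column/row range and issue one draw call per chunk; everything outside the range is simply never submitted to the GPU.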

There is no difference between gluLookAt and glTranslate/glScale: both just compute matrices.
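You can see this by building the 2D view matrix both ways by hand (no GL needed; on Android the equivalents would be `Matrix.setLookAtM` and `Matrix.translateM`, which also just fill a `float[16]`). This sketch assumes an axis-aligned 2D camera, column-major layout as OpenGL expects:

```java
// For a 2D camera at (camX, camY), a translate-based view matrix and a
// lookAt-based view matrix (eye at z = 1 looking at z = 0, up = +Y)
// produce the same rotation part and the same x/y translation.
public class Camera2D {
    /** View matrix via translation: equivalent of glTranslatef(-camX, -camY, 0). */
    public static float[] viewByTranslate(float camX, float camY) {
        float[] m = identity();
        m[12] = -camX; // column-major: translation lives in elements 12..14
        m[13] = -camY;
        return m;
    }

    /** View matrix via lookAt: eye (camX, camY, 1), center (camX, camY, 0), up +Y.
     *  The basis vectors collapse to the identity rotation, leaving a pure
     *  translation of (-camX, -camY, -1). */
    public static float[] viewByLookAt(float camX, float camY) {
        float[] m = identity();
        m[12] = -camX;
        m[13] = -camY;
        m[14] = -1f; // only difference: the eye sits at z = 1
        return m;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }
}
```

So pick whichever is more convenient to maintain; the GPU sees the same numbers either way.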

By the way, if your image is static, can't you precompute it (à la Google Maps)? Similarly, does your data offer some way to be "reduced" when zoomed out? E.g. for a point cloud, only display 1 out of every N points.
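The "1 out of N" reduction can be done once per zoom level when refilling a VBO. A sketch, assuming interleaved (x, y) vertex data; the names and the stride heuristic are illustrative:

```java
// Zoom-dependent decimation for a point cloud: at low zoom, upload only
// every stride-th point to the VBO instead of all of them.
public class PointLod {
    /** Choose a stride so that roughly targetCount points survive. */
    public static int strideFor(int totalPoints, int targetCount) {
        return Math.max(1, (int) Math.ceil(totalPoints / (double) targetCount));
    }

    /** Copy every stride-th (x, y) pair into a smaller array for the VBO. */
    public static float[] decimate(float[] xy, int stride) {
        int points = xy.length / 2;
        int kept = (points + stride - 1) / stride; // ceil(points / stride)
        float[] out = new float[kept * 2];
        for (int i = 0, o = 0; i < points; i += stride, o += 2) {
            out[o] = xy[2 * i];
            out[o + 1] = xy[2 * i + 1];
        }
        return out;
    }
}
```

For the 1M-vertex case in the question, targeting ~60k visible points gives a stride of 17, which also keeps each chunk under the 64k-vertex draw limit.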

