

OpenGL ES 2.0: Safe to assume that Blending is relatively cheap on Mobile Devices?

All over the place, I read what a serious performance hit Blending was, until I came across a comment that it was not so expensive on iOS devices due to their architecture.

Now, the wonderfully uber-controlled world of Apple is a bit different from Android's, but I've done some tests and it looks like my Blending is only half as bad for performance as switching from RGB555 to RGBA8888 (on the two devices I tried).
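
Roughly, that comparison comes down to which surface format you ask for when creating the GLSurfaceView. Below is a minimal sketch, not my actual test code (I show RGB565 here, since that is the usual 16-bit Android surface format; the class name and the renderer argument are placeholders):

    import android.content.Context;
    import android.opengl.GLSurfaceView;

    public class SurfaceFormatTest {
        static GLSurfaceView createTestView(Context context, boolean use32Bit,
                                            GLSurfaceView.Renderer renderer) {
            GLSurfaceView view = new GLSurfaceView(context);
            view.setEGLContextClientVersion(2);                // request an OpenGL ES 2.0 context
            if (use32Bit) {
                view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);   // 32-bit RGBA8888 + 16-bit depth
            } else {
                view.setEGLConfigChooser(5, 6, 5, 0, 16, 0);   // 16-bit RGB565, no destination alpha
            }
            view.setRenderer(renderer);
            return view;
        }
    }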

Questions:

  • Is there any rule of thumb that, while Android devices can differ substantially in their hardware, their ratio of GPU computational power to screen resolution does not fall below a certain threshold?

  • Does such a rule also apply to Blending?

  • Is there a list of cornerstone test devices somewhere, resulting from some systematic market analysis, in the form of: if it runs on these devices, it will run on pretty much any reasonable device?

  • Do you use blending, and what experience does it give your customers?

I see alternatives to using blending so I'm interested to know either what to invest in or whether I should avoid, hmm, the unknown.

I doubt there is any rule of thumb; even in the Apple world, tomorrow they may decide to switch to another architecture and all your assumptions are screwed. For Android, there are too many vendors and architectures to decide which blending threshold is supposed to be good for each one. The only rule of thumb is to use blending when you really need to. So the answer to your question is: it's NOT safe to assume blending is relatively cheap on mobile devices. Blending requires reading from memory, and memory is slow on mobile devices.
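
A minimal sketch of what "use blending only when you really need to" looks like in ES 2.0: keep blending disabled for opaque geometry and enable it only for the translucent pass (drawOpaque and drawTranslucent are placeholders, not code from this answer):

    import android.opengl.GLES20;

    public class BlendPass {
        static void drawFrame() {
            GLES20.glDisable(GLES20.GL_BLEND);
            drawOpaque();                        // opaque geometry: no framebuffer read needed

            GLES20.glEnable(GLES20.GL_BLEND);
            // Standard "over" blending: dst = src.rgb * src.a + dst.rgb * (1 - src.a)
            GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
            drawTranslucent();                   // only this pass pays the blending cost

            GLES20.glDisable(GLES20.GL_BLEND);
        }

        static void drawOpaque() { /* placeholder */ }
        static void drawTranslucent() { /* placeholder */ }
    }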

I've done some additional research which I thought I'd share. If you appreciate my investigation then let me know (upvote?) and I'll probably share more as I learn more.

With only 1.8% of Android devices running a version prior to API 8/2.2/Froyo, 98.2% of Android devices support OpenGL ES 2.0 (although the 2.2 API has some flaws).
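
For completeness, a minimal sketch of a runtime check for ES 2.0 support, complementing the market-share figures above:

    import android.app.ActivityManager;
    import android.content.Context;
    import android.content.pm.ConfigurationInfo;

    public class GlesCheck {
        static boolean supportsGles20(Context context) {
            ActivityManager am =
                    (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
            ConfigurationInfo info = am.getDeviceConfigurationInfo();
            // reqGlEsVersion packs major.minor, so 0x20000 means OpenGL ES 2.0.
            return info.reqGlEsVersion >= 0x20000;
        }
    }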

I found the following peak/max/... figures for the weakest chips of the above brands which support GLES20 (at maximum clock frequency); a rough fill-rate budget sketch follows the list:

  • Adreno 200 (22 million triangles per second, 133 million pixels per second)
  • Mali-200 (16 mtps, 275 mpps)
  • PowerVR SGX520 (7 mtps, 100 mpps)
  • Tegra APX 2500 (40 mtps, 400 or 600 mpps depending on source)
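
To put these fill rates into perspective against screen resolution, here is a rough back-of-the-envelope budget. The 60 fps target and the WVGA resolution are my own illustrative assumptions, not vendor data:

    public class FillRateBudget {
        public static void main(String[] args) {
            long pixelsPerScreen = 800L * 480L;                    // a common WVGA resolution
            long pixelsPerSecondAt60Fps = pixelsPerScreen * 60L;   // ~23 Mpix/s for one full-screen pass
            long peakFillRate = 100000000L;                        // weakest chip above: SGX520, 100 mpps
            double fullScreenLayersPerFrame = (double) peakFillRate / pixelsPerSecondAt60Fps;
            // ~4.3 full-screen layers per frame; every blended layer eats into this budget
            // and additionally costs a framebuffer read.
            System.out.println("Full-screen layers per frame at 60 fps: " + fullScreenLayersPerFrame);
        }
    }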

The cheapest Android device I could find as of today uses a

  • MARVELL PXA910 800MHz (10 or 20 mtps, 200 mpps)

where the 20 million triangles per second appears to be a theoretical maximum if half of the triangles need not be drawn (by means of culling).
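
For reference, that kind of culling (skipping triangles that face away from the camera) is enabled in ES 2.0 roughly like this minimal sketch:

    import android.opengl.GLES20;

    public class CullingSetup {
        static void enableBackFaceCulling() {
            GLES20.glEnable(GLES20.GL_CULL_FACE);
            GLES20.glCullFace(GLES20.GL_BACK);    // skip triangles facing away from the camera
            GLES20.glFrontFace(GLES20.GL_CCW);    // counter-clockwise winding counts as front-facing
        }
    }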

I find it a bit suspicious that Marvell differentiates between two peak values; maybe one should also be a bit sceptical about the Tegra figures, which I found on a marketing slide and a forum. Also, I have a device with a PowerVR SGX530 (14 mtps, 200 mpps) which renders my test app on an 800x480 screen at least as fast as my Tegra 2 T20 devices (71 mtps, 1200 mpps), which I tried at 1024x600.

I have two identical devices with the Tegra 2 T20, and the one running Android 3 renders my test app faster than the one running Android 2. Both run unofficial Android releases, though. I thought this might lead to suboptimal GPU utilization, but the CPU load is shown as ridiculously low. Maybe there's SurfaceView overhead which starts to become significant beyond a certain frame rate -- but the PowerVR device also runs Android 2.
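
For reference, the frame-time comparison boils down to something like the following simplified sketch (not my actual test app; drawScene() is a placeholder for the real rendering):

    import android.opengl.GLSurfaceView;
    import android.util.Log;
    import javax.microedition.khronos.opengles.GL10;

    public abstract class TimedRenderer implements GLSurfaceView.Renderer {
        private long windowStartNs = 0;
        private int frames = 0;

        @Override
        public void onDrawFrame(GL10 gl) {
            long now = System.nanoTime();
            if (windowStartNs == 0) windowStartNs = now;
            drawScene();                                  // the actual test rendering (placeholder)
            frames++;
            if (now - windowStartNs >= 1000000000L) {     // log roughly once per second
                double avgMs = (now - windowStartNs) / 1e6 / frames;
                Log.d("Perf", "average frame time: " + avgMs + " ms");
                frames = 0;
                windowStartNs = now;
            }
        }

        protected abstract void drawScene();
    }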

This has nothing much to do with alpha blending so far (except I read that on Nvidia chips it's implicitly done in the fragment shaders), but I felt it would be worth documenting these starting points. But stay tuned for updates.
