
Single or multiple SDL_Renderer

I'm slowly designing the structure of the classes involved in my "game", and I'm wondering whether SDL_Renderer should be a shared resource, synchronized across multiple threads, or whether each object should have its own renderer (e.g. each enemy refers to its own renderer to draw itself in the window). Any advice would be appreciated, ideally with an explanation.

Thanks in advance.

Just a single renderer

As keltar pointed out, the API is not made to be used like this. But that's not a flaw of the API. The underlying work done by the GPU is just very different from the "render this and that" calls made through the C API, so concurrency doesn't translate 1:1 like that. In a sense, the CPU is mostly just telling the GPU what to do, and telling it twice as many things at once won't make it finish twice as fast.
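
For illustration, here is a minimal sketch of what "just a single renderer" looks like in practice: one SDL_Renderer created alongside the window, and every drawable object borrows it for its draw call instead of owning its own. The Enemy struct and its members are made-up names for this example, not part of your design.

    // One shared SDL_Renderer, passed to every drawable object (sketch).
    #include <SDL.h>
    #include <vector>

    struct Enemy {
        SDL_Texture* tex;   // not owned; the same texture can be shared by many enemies
        SDL_Rect dst;
        void draw(SDL_Renderer* r) const {            // the renderer is passed in,
            SDL_RenderCopy(r, tex, nullptr, &dst);    // never owned by the entity
        }
    };

    int main(int argc, char* argv[]) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        std::vector<Enemy> enemies;   // filled elsewhere

        bool running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = false;

            SDL_RenderClear(ren);
            for (const Enemy& en : enemies)   // all objects draw through the same renderer
                en.draw(ren);
            SDL_RenderPresent(ren);           // one present per frame, from one thread
        }

        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }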

Not fast enough?

If you're writing a 2D game for any platform that's relevant today, then SDL2 should be fast enough in most cases. If your game isn't running fast enough, perhaps something else is going wrong and something on the CPU side is causing a bottleneck. Are you sure you're using hardware-accelerated rendering? Perhaps some setting went wrong and SDL is falling back to the software renderer - check the flags reported by SDL_GetRendererInfo() to make sure that this is not the case. Other things that might bottleneck performance are any usages of SDL_Surface. Also, constantly uploading lots of data to the GPU can slow it down, so make sure not to load sprites more often than needed. For instance, when the same monster appears again, you can reuse the textures you already loaded instead of loading them again. Another example is SDL_TTF; this library is slow. Rendering an HP number for a monster with it can easily be slower than rendering the monster itself - there are alternatives where the glyphs are prerendered once and stored as textures.
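
As a small sketch of the hardware-acceleration check mentioned above: SDL_GetRendererInfo() fills an SDL_RendererInfo whose flags tell you whether the accelerated or the software backend was picked. The helper name check_renderer is made up for this example.

    // Report which backend the renderer actually uses (sketch).
    #include <SDL.h>
    #include <cstdio>

    void check_renderer(SDL_Renderer* ren) {
        SDL_RendererInfo info;
        if (SDL_GetRendererInfo(ren, &info) == 0) {
            std::printf("renderer backend: %s\n", info.name);   // e.g. "opengl", "direct3d", "software"
            if (info.flags & SDL_RENDERER_SOFTWARE)
                std::printf("warning: software fallback is in use\n");
            if (info.flags & SDL_RENDERER_ACCELERATED)
                std::printf("hardware acceleration is active\n");
        }
    }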

Rendering textures faster

Having said that, rendering lots of small sprites does carry some overhead, and if you need to render an extreme number of sprites it might not be fast enough even if you are doing everything right. There are libraries such as SDL_gpu that can leverage techniques such as texture batching to increase rendering speed in some cases. You can also work directly with OpenGL, but that's what libraries like SDL_gpu do anyway. Last but not least, SDL2 supports Vulkan now, and while I haven't had the opportunity to try it yet, it supposedly is much more efficient when it comes to API calls on the CPU side. So if that's your bottleneck and using Vulkan is a possibility, it may be worth a shot.
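
Below is a rough sketch of how SDL_gpu is typically used, based on my reading of its public API (GPU_Init / GPU_LoadImage / GPU_Blit / GPU_Flip); treat the exact calls and the sprite.png asset as assumptions and check the library's headers before relying on them.

    // SDL_gpu usage sketch: draw one sprite per frame through GPU_Blit.
    #include "SDL_gpu.h"

    int main(int argc, char* argv[]) {
        GPU_Target* screen = GPU_Init(640, 480, GPU_DEFAULT_INIT_FLAGS);  // also initializes SDL
        if (screen == nullptr) return 1;

        GPU_Image* sprite = GPU_LoadImage("sprite.png");  // hypothetical asset

        bool running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = false;

            GPU_Clear(screen);
            GPU_Blit(sprite, nullptr, screen, 320.0f, 240.0f);  // repeated blits of one image can be batched
            GPU_Flip(screen);
        }

        GPU_FreeImage(sprite);
        GPU_Quit();
        return 0;
    }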

Sorry if most of this doesn't even apply to your question. Perhaps you don't actually have a bottleneck/performance issue, but were just curious about how the API would be used optimally.
