How to Handle a Large Amount of Objects
I have a question about how to handle a very large number of objects to improve performance. I'm creating a 2D game with infinite block-like terrain, and obviously this comes with some performance issues.

The approach I've come up with is to check whether the player's X value has reached a multiple of 1000. When it has, I take every block that is already in a save file or in the game world and save it to the file. Afterwards, I destroy each block in the game world. Then I loop through every block saved in the file and test whether it's within a certain radius; if it is, I create that block.

However, I'm not even sure this is efficient at all. Every time I reach a multiple of 1000, the game freezes for a second or two, and after adding some print statements, it seems the majority of the time is spent reading the file. Is there a better way to handle this that I'm missing?
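For reference, the multiple-of-1000 check effectively partitions the world into fixed-width chunks. A hedged sketch of that partitioning idea, where each chunk would get its own file so only nearby chunks are ever read (all names here, like CHUNK_WIDTH, are illustrative, not from the game):

```java
// Sketch: key each fixed-width chunk of terrain by an integer index, so
// loading around the player only touches a handful of chunk files instead
// of rescanning one big save file. Names are illustrative.
public class ChunkMath {
    static final int CHUNK_WIDTH = 1000;

    // Which chunk does a world X coordinate fall into?
    // floorDiv handles negative coordinates correctly, unlike plain "/".
    static int chunkIndex(int worldX) {
        return Math.floorDiv(worldX, CHUNK_WIDTH);
    }

    public static void main(String[] args) {
        int playerX = -1500;
        int radius = 1; // the player's chunk plus one on each side
        int center = chunkIndex(playerX);
        for (int c = center - radius; c <= center + radius; c++) {
            // each chunk would map to its own file, e.g. "chunk_" + c + ".dat"
            System.out.println("load chunk " + c);
        }
    }
}
```

With per-chunk files, crossing a boundary only loads the chunks entering the radius and saves the ones leaving it, instead of rewriting everything.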
I'm actually working on such a game myself, so my approach may differ from what's best for you.

I personally use one of Google's Guava caches, with a removal listener. When an object is removed for any reason other than my removing it explicitly, I write it to disk. An example would be:
LoadingCache<Position2D, BlockOfTiles> blocks = CacheBuilder.newBuilder()
        .maximumSize(10000)
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .removalListener(new MyTileMapListener())
        .build(
            new CacheLoader<Position2D, BlockOfTiles>() {
                public BlockOfTiles load(Position2D key) throws Exception {
                    return loadFromDisk(key);
                }
            });
MyTileMapListener might look something like this:
private class MyTileMapListener implements RemovalListener<Position2D, BlockOfTiles> {
    @Override
    public void onRemoval(RemovalNotification<Position2D, BlockOfTiles> notification) {
        // Don't write back entries we removed explicitly ourselves
        if (notification.getCause() == RemovalCause.EXPLICIT) return;
        writeToDisk(notification.getKey(), notification.getValue());
    }
}
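If you'd rather not pull in Guava, the same write-on-evict pattern can be sketched with a plain LinkedHashMap. This is a minimal stand-in for illustration, not the answer's actual code, and the class and method names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the write-on-evict pattern without Guava: an
// access-ordered LinkedHashMap that persists the least-recently-used
// entry when the size cap pushes it out. Names are illustrative.
public class EvictingTileCache extends LinkedHashMap<Long, int[][][]> {
    private final int maxEntries;

    public EvictingTileCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true makes this an LRU map
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, int[][][]> eldest) {
        if (size() > maxEntries) {
            writeToDisk(eldest.getKey(), eldest.getValue()); // persist before dropping
            return true; // tell the map to evict the eldest entry
        }
        return false;
    }

    void writeToDisk(Long key, int[][][] block) {
        // in a real game this would serialize the block to its chunk file
        System.out.println("evicted chunk " + key);
    }
}
```

Guava's version buys you the loading callback, time-based expiry, and concurrency handling on top of this basic idea.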
Currently, I'm able to keep 30K objects loaded without trouble, in spite of their having large int[][][] arrays as fields.
A word of caution: the caches create threads internally. Make sure to use at least basic synchronization to keep the removal listener from interfering with your main thread when it tries to write. Something as simple as

synchronized(someCommonObject){ // read or write }

would work, and would be fairly idiomatic if the common object is your file output stream or something similar. Databases like LevelDB usually handle this for you.
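A minimal sketch of that locking, assuming the listener thread and the game thread share one writer object (the names here are illustrative, and a StringBuilder stands in for the real file stream):

```java
// Sketch: guard disk writes with a shared lock object so the removal
// listener's background thread and the game thread never write at the
// same time. Names are illustrative, not from the answer.
public class GuardedWriter {
    private final Object diskLock = new Object();
    private final StringBuilder out = new StringBuilder(); // stands in for a file stream

    void write(String record) {
        synchronized (diskLock) { // only one thread inside at a time
            out.append(record).append('\n');
        }
    }

    String contents() {
        synchronized (diskLock) {
            return out.toString();
        }
    }
}
```

Both the listener's writeToDisk and any saves triggered from the game loop would go through the same lock.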