
Does writeToFile:atomically: block asynchronous reading?

While my application is in use, I sometimes process some large data in the background (so it is ready when the user needs it; a kind of indexing). When this background process finishes, it needs to save the data to a cache file, but since the data is really large this takes a few seconds.

But at the same time, the user may open a dialog that displays images and text loaded from disk. If this happens while the background process is saving its data, the user interface has to wait until the save is complete. (This is not what I want, since the user then has to wait 3-4 seconds until the images and texts are loaded from disk!)

So I am looking for a way to throttle the writing to disk. I thought of splitting the data into chunks and inserting a short delay between saving the individual chunks. During this delay, the user interface would be able to load the needed texts and images, so the user would not notice any delay.

At the moment I am using [[array componentsJoinedByString:@"\n"] writeToFile:@"some name.dic" atomically:YES]. This is a very high-level solution that doesn't allow any customization. How can I write this large data into one file without saving all of it in one shot?
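For reference, here is a minimal sketch of this current one-shot approach with basic error reporting (assuming array is an NSArray of NSString lines; the file name is the one above):

```objc
#import <Foundation/Foundation.h>

// One-shot save of the whole index, as currently done in the question.
// atomically:YES writes to a temporary file first and then moves it
// into place, so readers never see a half-written file.
static void saveWholeIndex(NSArray<NSString *> *array) {
    NSString *joined = [array componentsJoinedByString:@"\n"];
    NSError *error = nil;
    BOOL ok = [joined writeToFile:@"some name.dic"
                       atomically:YES
                         encoding:NSUTF8StringEncoding
                            error:&error];
    if (!ok) {
        NSLog(@"Saving the index failed: %@", error);
    }
}
```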

Does writeToFile:atomically: block asynchronous reading?

No. It works like writing to a temporary file: once the write completes successfully, the temporary file is renamed to the destination (replacing the pre-existing file at the destination, if one exists).

You should consider how you can break your data up so the save is not so slow. If it is all divided into strings/lines and still takes seconds to write, an easy way to partition the database would be by first character, as sketched below. Of course, a better scheme can likely be imagined, based on how you access, search, and update the index/database.
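A minimal sketch of that first-character split, assuming the index is the same NSArray of NSString lines as in the question; the output directory and shard file naming are assumptions:

```objc
#import <Foundation/Foundation.h>

// Groups the lines by their first character and writes each bucket to its
// own small file. Each individual write then finishes quickly, so a UI read
// only ever waits for one shard, not the whole index.
static void writeShardedIndex(NSArray<NSString *> *lines, NSString *directory) {
    NSMutableDictionary<NSString *, NSMutableArray<NSString *> *> *buckets =
        [NSMutableDictionary dictionary];
    for (NSString *line in lines) {
        if (line.length == 0) continue;
        NSString *key = [[line substringToIndex:1] lowercaseString];
        NSMutableArray<NSString *> *bucket = buckets[key];
        if (!bucket) {
            bucket = [NSMutableArray array];
            buckets[key] = bucket;
        }
        [bucket addObject:line];
    }
    [buckets enumerateKeysAndObjectsUsingBlock:^(NSString *key,
                                                 NSMutableArray<NSString *> *bucket,
                                                 BOOL *stop) {
        NSString *path = [directory stringByAppendingPathComponent:
                          [NSString stringWithFormat:@"%@.dic", key]];
        NSError *error = nil;
        [[bucket componentsJoinedByString:@"\n"] writeToFile:path
                                                  atomically:YES
                                                    encoding:NSUTF8StringEncoding
                                                       error:&error];
    }];
}
```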

…inserting a short delay between saving the different chunks. During this delay, the user interface would be able to load the needed texts and images, so the user would not notice any delay.

Don't. Just implement the move/replace of the atomic write yourself (write to a temporary file while indexing and writing). Then your app can serialize its read and write commands explicitly, for fast, consistent, and correct access to these shared resources.
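A minimal sketch of that idea, assuming the index has already been built as an NSData; the queue label, helper names, and URLs are hypothetical. The slow write goes to a temporary file off the shared queue, and only the quick replace (and the UI's reads) are serialized:

```objc
#import <Foundation/Foundation.h>

// One serial queue guards the cache file; everything that touches it goes here.
static dispatch_queue_t cacheQueue(void) {
    static dispatch_queue_t queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        queue = dispatch_queue_create("com.example.cache.io", DISPATCH_QUEUE_SERIAL);
    });
    return queue;
}

static void saveIndex(NSData *indexData, NSURL *cacheURL) {
    // The slow part: write the large index to a private temporary file.
    // Nothing shared is touched yet, so UI reads are unaffected.
    NSString *tmpName = [[NSUUID UUID] UUIDString];
    NSURL *tmpURL = [NSURL fileURLWithPath:
        [NSTemporaryDirectory() stringByAppendingPathComponent:tmpName]];
    NSError *error = nil;
    if (![indexData writeToURL:tmpURL options:0 error:&error]) {
        NSLog(@"Writing temporary index failed: %@", error);
        return;
    }
    // The cheap part: swap the temporary file into place. Only this rename
    // competes with the UI's reads on the serial queue.
    dispatch_async(cacheQueue(), ^{
        NSError *replaceError = nil;
        [[NSFileManager defaultManager] replaceItemAtURL:cacheURL
                                           withItemAtURL:tmpURL
                                          backupItemName:nil
                                                 options:0
                                        resultingItemURL:NULL
                                                   error:&replaceError];
    });
}

// Reads go through the same queue, so they can never observe a half-finished
// replace and only ever wait for a rename, not a multi-second write.
static void readCache(NSURL *cacheURL, void (^completion)(NSData *)) {
    dispatch_async(cacheQueue(), ^{
        NSData *data = [NSData dataWithContentsOfURL:cacheURL];
        dispatch_async(dispatch_get_main_queue(), ^{ completion(data); });
    });
}
```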

You should look at the NSFileHandle class. Using a combination of seekToEndOfFile and writeData:(NSData *)data, you can do the work you wish.
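A minimal sketch of chunked writing with NSFileHandle, assuming the data has already been split into an NSArray of NSData chunks; the pause length and file path are arbitrary:

```objc
#import <Foundation/Foundation.h>

// Appends the cache file chunk by chunk instead of in one shot, pausing
// briefly between chunks so other disk work (the UI's image and text
// loading) can run in between.
static void writeIndexInChunks(NSArray<NSData *> *chunks, NSString *path) {
    // Create an empty file so a write handle can be opened on it.
    [[NSFileManager defaultManager] createFileAtPath:path contents:nil attributes:nil];
    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    for (NSData *chunk in chunks) {
        [handle seekToEndOfFile];   // append after what was written so far
        [handle writeData:chunk];
        // Purely illustrative pause; scheduling the chunks on a background
        // queue is usually nicer than sleeping.
        [NSThread sleepForTimeInterval:0.01];
    }
    [handle closeFile];
}
```

Note that, unlike writeToFile:atomically:, readers can observe the partially written file while chunks are still being appended, so this fits best when combined with the temporary-file-and-replace approach described above.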
