
ElectronJS cache

I'm building an offline app using Electron JS and React JS. One of the launch steps is loading a huge file (more than 1 GB, and it cannot be split), so I have to wait around 50-60 seconds until the file is fully loaded. Is there a way to load it on the first launch and then save it to a cache, so that the next time I start the app it won't take that much time?

You cannot cache data in memory across app launches. When the app is closed, the data is gone. There are only a few ways to work around it:

Method 1 (cache the data until the app is closed):

Read the file asynchronously (so it doesn't freeze the app) once on launch and cache it in memory by storing the data in a variable. The app will take more than 1 GB of RAM, and the cache will disappear when the app is closed.

Method 2 (read the data in chunks):

Do you need to work on the whole dataset at once? If not, don't read the whole file: separate it into multiple chunks (files) and read the specific files you need at runtime.

Method 3 (compute while reading):

If you only need the data to compute something once, read the JSON from the drive with a stream and perform the computations in real time on every JSON object during the stream. By the end of the stream, all the needed computations will be done and the app will not need ~1 GB of RAM. There are a few modules for this: stream-json, bfj, big-json.

Usually you don't need the entire file in the first place. Can you share the nature of this big file to help us better understand the problem?

A "cache" in memory is not possible: memory is wiped when the app quits. That's just how memory works.

But the thing is, reading 1 GB into memory shouldn't take 60 seconds; that's way too long. I suspect there are other bottlenecks you need to find.

My suggestion is to read the file synchronously and measure the actual time it takes. You can use the built-in perf_hooks module of Node.js to measure the time. My guess is that just loading the file into memory takes around 1 second at most, so the other 59 seconds are spent doing something else. The problem is very likely elsewhere.

Now, if the bottleneck really is loading the file from disk into memory, then the mmap syscall might be what you need. That's another topic, though.
