How does Spark implement its interactive in-memory cache?

I wonder: when a program ends, is the memory it used freed by the GC?

And how do I cache data in Spark when I am in an interactive Scala interpreter?

Does that mean each interpreter session runs in a single process?

But much more often I run the code from a terminal rather than inside the interpreter; in that case, how can I keep data in memory?

Whether you are using the interpreter or running your code from the command prompt, you can call rdd.cache() to keep the RDD in memory.
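Here is a minimal Scala sketch of what that looks like in a standalone program; the application name, master URL, and the parallelized range are illustrative assumptions, not part of the original answer:

```scala
import org.apache.spark.sql.SparkSession

object CacheExample {
  def main(args: Array[String]): Unit = {
    // Local mode for illustration; in a real deployment the master
    // is usually set by spark-submit, not hard-coded.
    val spark = SparkSession.builder()
      .appName("CacheExample")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Build an RDD and mark it for in-memory caching.
    // cache() is lazy and is equivalent to persist(StorageLevel.MEMORY_ONLY).
    val rdd = sc.parallelize(1 to 1000000)
    rdd.cache()

    // The first action materializes the RDD and stores its partitions in memory...
    println(rdd.sum())
    // ...subsequent actions reuse the cached partitions instead of recomputing.
    println(rdd.count())

    spark.stop()
  }
}
```

Note that the cache lives only as long as the driver's SparkContext: in the interactive shell a single long-lived context serves many commands, so cached RDDs survive between them, whereas a program launched from the terminal (e.g. via spark-submit) releases its cached data when the process exits.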
