
Boost managed_mapped_file: setting maximum allowed memory usage

Is there any way to set the maximum allowed memory used by managed_mapped_file? For example, I have 64GB of memory and I create a 20GB file. This is all loaded into memory. Is there a way to specify to only use 1GB of memory, for example? Even an approximate limit would suffice.

EDIT: I should add that I use boost::interprocess::vector; maybe there is a way to specialize the allocator?

typedef bi::allocator<Node, bi::managed_mapped_file::segment_manager> allocator_node_t;
typedef bi::vector<Node, allocator_node_t> vector_node_t;

bi::managed_mapped_file* nodeFile = new bi::managed_mapped_file(bi::open_or_create, "nodes_m.bin", bigSize);
allocator_node_t alloc_n(nodeFile->get_segment_manager());
vector_node_t* nodes = nodeFile->find_or_construct<vector_node_t>("nodes")(alloc_n);

There's no such way (portably).

Also the premise is wrong:

For example, I have 64GB of memory and I create a 20GB file. This is all loaded into memory

Wrong: it will load only the pages that are actually used. Yes, this may mean you end up with the full 20GB in memory. The OS is free to do that as long as no other process requires the physical memory for other tasks.

It would be silly for the OS to arbitrarily unmap that data for no reason. You want the OS to take advantage of available memory. Otherwise the money spent on those silicon chips would be wasted.

EDIT: I should add that I use boost::interprocess::vector; maybe there is a way to specialize the allocator?

Using boost::interprocess::vector without a custom allocator doesn't use the mapped file at all. You need an allocator such as boost::interprocess::allocator<T, boost::interprocess::managed_mapped_file::segment_manager> for the elements to live in the mapped file in the first place.

And no, nothing in the allocator can override the OS virtual memory tuning parameters.

Nothing needs to be specialized (in the C++ sense of the word):

 bi::managed_mapped_file* nodeFile = new bi::managed_mapped_file(bi::open_or_create, "nodes_m.bin", bigSize);
 allocator_node_t alloc_n(nodeFile->get_segment_manager());
 vector_node_t* nodes = nodeFile->find_or_construct<vector_node_t>("nodes")(alloc_n);

Executing this code the first time around (i.e. creating "nodes_m.bin") will not load bigSize. In fact it will not even allocate bigSize on disk! On all systems that support it (I know of no mainstream OS that doesn't) the file is created sparse.
