
Pooling memory in C - Memory Management

I am writing a cross-platform shared library in C. The workflow of this library looks like this:

lib *handle = lib_init();
result = lib_do_work(handle);
lib_destroy(handle);

Usually, users will init it once their application starts and close it when the application ends. lib_do_work() usually gets called multiple times per second. So, to avoid memory allocation and deallocation on each call, I am using a pooling mechanism. With this, I ask the pool to return an instance of the structure that I need. The pool returns an unused instance, or creates a new instance if nothing is free. This new instance is also added to the pool so that it can be reused next time.

Any API call to my library starts with a call to reset_pool(), which makes all elements in the pool usable again. The pool is destroyed as part of the lib_destroy() call. In my tests, I observed that my pool sometimes grows to 100,000+ structure instances.

I am wondering: is this a good practice for handling memory? Any help would be great.

As has already been pointed out in the comments, only profiling will tell you whether allocations and deallocations are a bottleneck in your application. Also, if your system only ever allocates and deallocates objects of the same size, the default allocator will probably perform quite well.

Usually, a pool provides an allocation optimization by pre-allocating a block of elements. The block is carved up into individual elements to satisfy individual allocation requests. When the pool is depleted, a new block is allocated. This is an allocation optimization because making fewer calls to the library allocator is cheaper.

A pool allocator can also help reduce fragmentation. If the application allocates and deallocates objects of different sizes with varying lifetimes, the chance of fragmentation increases (and the coalescing code in the default allocator has to do more work). If a pool allocator is created for each object size, and each pool block is the same size, this effectively eliminates fragmentation.

(As Felice points out, there is another kind of pool that pre-allocates a fixed amount of memory for the application to use, as a way to ensure the application does not use more memory than it is provisioned for.)

On deallocation, individual elements can be placed on a freelist. But your reset_pool implementation can simply walk through the blocks, free each one, and then allocate a new block.

The following is somewhat simplistic. It only deals with one kind of element, and POOL_SIZE would need to be tuned to something reasonable for your application. Assume data structures like this:

typedef struct element {
    struct element *next;
    /* ... */
} element;

typedef struct pool_block {
    struct pool_block *next;
    struct element block[POOL_SIZE];
} pool_block;

typedef struct element_pool {
    struct pool_block *pools;
    struct element *freelist;
    int i;
} element_pool;

Then the API would look something like:

void pool_init (element_pool *p) { /* ... */ }
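The `/* ... */` body of pool_init has to establish an invariant that pool_alloc relies on: if no block exists yet, the index must not point into one. One minimal sketch (repeating the structures above so it compiles on its own; the choice of `p->i = POOL_SIZE` is an assumption that forces the first pool_alloc to create a block rather than dereference a NULL block list):

```c
#include <stddef.h>

#define POOL_SIZE 32

typedef struct element {
    struct element *next;
    /* payload fields would go here */
} element;

typedef struct pool_block {
    struct pool_block *next;
    struct element block[POOL_SIZE];
} pool_block;

typedef struct element_pool {
    struct pool_block *pools;
    struct element *freelist;
    int i;
} element_pool;

/* One possible pool_init: no blocks yet, empty freelist, and an
 * "exhausted" index so pool_alloc's (p->i < POOL_SIZE) test fails
 * and the first allocation creates a fresh block. */
void pool_init (element_pool *p) {
    p->pools = NULL;
    p->freelist = NULL;
    p->i = POOL_SIZE;
}
```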

element * pool_alloc (element_pool *p) {
    element *e = p->freelist;
    if (e) p->freelist = e->next;
    else do {
        if (p->i < POOL_SIZE) {
            e = &p->pools->block[p->i];
            p->i += 1;
        } else {
            pool_block *b = pool_block_create();
            b->next = p->pools;
            p->pools = b;
            p->i = 0;
        }
    } while (e == 0);
    return e;
}

void pool_dealloc (element_pool *p, element *e) {
    e->next = p->freelist;
    p->freelist = e;
}

void pool_reset (element_pool *p) {
    pool_block *b;
    while ((b = p->pools)) {
        p->pools = b->next;
        pool_block_destroy(b);
    }
    pool_init(p);
}
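pool_block_create and pool_block_destroy are left undefined in the sketch above; assuming they are plain malloc/free wrappers (and with pool_dealloc returning void), a complete, self-contained version would be:

```c
#include <stdlib.h>

#define POOL_SIZE 32

typedef struct element {
    struct element *next;
    /* payload fields would go here */
} element;

typedef struct pool_block {
    struct pool_block *next;
    struct element block[POOL_SIZE];
} pool_block;

typedef struct element_pool {
    struct pool_block *pools;
    struct element *freelist;
    int i;
} element_pool;

/* Assumed helpers: plain wrappers around malloc/free.
 * Linking a new block into the list is done by pool_alloc. */
static pool_block *pool_block_create (void) {
    return malloc(sizeof(pool_block));
}

static void pool_block_destroy (pool_block *b) {
    free(b);
}

void pool_init (element_pool *p) {
    p->pools = NULL;
    p->freelist = NULL;
    p->i = POOL_SIZE;  /* force the first pool_alloc to create a block */
}

element * pool_alloc (element_pool *p) {
    element *e = p->freelist;
    if (e) p->freelist = e->next;  /* reuse a deallocated element */
    else do {
        if (p->i < POOL_SIZE) {
            /* carve the next element out of the newest block */
            e = &p->pools->block[p->i];
            p->i += 1;
        } else {
            /* newest block exhausted (or none yet): add another */
            pool_block *b = pool_block_create();
            b->next = p->pools;
            p->pools = b;
            p->i = 0;
        }
    } while (e == 0);
    return e;
}

void pool_dealloc (element_pool *p, element *e) {
    e->next = p->freelist;
    p->freelist = e;
}

void pool_reset (element_pool *p) {
    pool_block *b;
    while ((b = p->pools)) {  /* free every block, then start over */
        p->pools = b->next;
        pool_block_destroy(b);
    }
    pool_init(p);
}
```

With this version, drawing POOL_SIZE + 1 elements allocates a second block, an element handed to pool_dealloc is returned by the very next pool_alloc, and pool_reset frees every block and leaves the pool as freshly initialized.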

I don't know whether this is overkill for your current architecture, but usually a pool also limits the number of pooled instances and queues requests when all instances are busy.
