
Virtual memory and physical memory of the process rise when data is allocated and released in different threads

I have a memory allocation problem on Ubuntu 18.04:
(1) I allocate some data memory in thread 1,
(2) release the data and allocate the same data to the same object again in thread 2.
After step (2), the virtual memory and physical memory of the process grow to roughly twice what they were after step (1). I use shared_ptr to manage the memory and have also run the process under valgrind, so I am sure there is no memory leak. But I wonder why the memory of the process rises so much. Is there any method to remove this memory growth?

Here is my example code, with a two-level hierarchical pyramid structure: the top level is LGMemory, which holds a 2D vector of shared_ptr, and the second level is the Patch class, which contains the real data. Step (2) is executed in LGMemory::updateData() in a separate thread. I have tested different cases in the code: in cases 1, 3 and 5 the memory of the process rises in step (2), but in cases 2, 4 and 6 it does not. Why?
Any hint about the problem would be very helpful, thanks.

#include <iostream>
#include <thread>
#include <functional>
#include <vector>
#include <memory>
#include <unistd.h>
#include <set>
#include <atomic>
// Block the calling thread for roughly n seconds.
void waitFor(int n)
{
    int ii = n;
    while((ii--) > 0)
    {
        sleep(1);
    }
}

class Cell
{
    public:
 
    Cell(){};
    Cell(int x, int y){};
    ~Cell(){};

    private:

    double acc, va, mNew, vxy;
    std::vector<double> hitP;
    double a[5];
    double b[5];
    int n, visits, laser_count_, nNew, visitsNew, laser_count_NEW, firstMapId, firstMapIdNEW;
    std::set<int> hitedMapIds;
};
class Cell2{

    public:
    long a[10];
};

template<class T=Cell>
class Patch{
    public:
    Patch(){};
     Patch(size_t x, size_t y)
     {
        data_.resize(x);
        for(size_t ii=0; ii<data_.size();++ii)
        {
            data_[ii].resize(y);
        }
     };

    ~Patch()
    {
        for(size_t ii=0; ii<data_.size();++ii)
        {
            std::vector<T>().swap(data_[ii]);
        }
        std::vector<std::vector<T> >().swap(data_);
    };

    std::vector<std::vector<T> > data_; 
};

template<class T>
class LGMemory
{
    public:
    LGMemory();
    ~LGMemory();

    void resize(int xLen, int yLen);
    void fillData();
    void clearData();
    void updateData();
    std::atomic<int> status_;   // flag polled by the worker thread; set to 5102 to start the update
    std::thread* thread_;

    private:
    
    std::vector<std::vector<std::shared_ptr<T> > > data_;
    int xLength_;
    int yLength_;
};

template<class T>
LGMemory<T>::LGMemory():status_(0),thread_(nullptr),xLength_(0),yLength_(0)
{
    data_.resize(xLength_);
    for(auto& outVec : data_)
    {
        outVec.resize(yLength_);
        for(auto& inItem : outVec)
        {
            inItem = nullptr;
        }
    }
    // updateData() runs in a separate thread; it waits until status_ becomes 5102.
    thread_ = new std::thread(&LGMemory::updateData,this);
}

template<class T>
LGMemory<T>::~LGMemory()
{
    if(thread_)
    {
        // Join before deleting; deleting a still-joinable std::thread calls std::terminate().
        if(thread_->joinable())
        {
            thread_->join();
        }
        delete thread_;
    }
    clearData();
}
template <class T>
void LGMemory<T>::resize(int xLen, int yLen)
{
    xLength_ = xLen;
    yLength_ = yLen;
    data_.resize(xLength_);
    for (size_t ii = 0; ii < data_.size(); ++ii)
    {
        data_[ii].resize(yLength_);
        for (size_t jj = 0; jj < data_[ii].size(); ++jj)
        {
            data_[ii][jj] = nullptr;
        }
    }
}

template<class T>
void LGMemory<T>::fillData()
{
    for(size_t ii=0; ii<data_.size();++ii)
    {
        for(size_t jj=0;jj<data_[ii].size();++jj)
        {
            data_[ii][jj] =std::make_shared<T>(32,32);
        }
    }

}

template<class T>
void LGMemory<T>::clearData()
{
    for(size_t ii=0; ii<data_.size();++ii)
    {
        std::vector<std::shared_ptr<T> >().swap(data_[ii]);
    }
    std::vector<std::vector<std::shared_ptr<T> > >().swap(data_);
}

template<class T>
void LGMemory<T>::updateData()
{
    // Poll once per second until the main thread sets status_ to 5102.
    while(status_ != 5102)
    {
        sleep(1);
    }

    clearData();
    std::cout<<" updateData ,clear finish, wait for fill data..."<<std::endl;
    waitFor(1);
    resize(xLength_,yLength_);

    fillData();
    std::cout<<" updateData , fill data finish..."<<std::endl;
}

int main(int argc,char** argv)
{
    std::cout<<"start test ..."<<std::endl;
    
    /// case 1 
    LGMemory<Patch< > > lg;
    lg.resize(50,50);
    
    //// case 2
    // LGMemory<Patch< > > lg;
    // lg.resize(100,100);

    //// case 3
    //LGMemory<Patch< Cell2 > > lg;
    //lg.resize(50,50);

    //// case 4
    // LGMemory<Patch< Cell2 > > lg;
    // lg.resize(100,100);

    ////case 5
    //LGMemory<Patch< long > > lg;
    //lg.resize(50,50);

    ////case 6
    //LGMemory<Patch< long > > lg;
    //lg.resize(100,100);


    lg.fillData();

    waitFor(10);
    std::cout<<" lg update data begin...."<<std::endl;
    lg.status_ =5102;
    /// updateData execute in another thread 
    // wait the updateData method execute finish. 
    waitFor(20);

    lg.thread_->join();
    return 0;
}
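
To make the growth in the different cases easier to see without watching top, a helper like the sketch below can print VmSize and VmRSS from /proc/self/status at interesting points, e.g. right after fillData() and right after updateData(). This is my own sketch and assumes Linux; printMemoryUsage is not part of the test code above.

#include <fstream>
#include <iostream>
#include <string>

// Print the VmSize (virtual) and VmRSS (resident/physical) lines of
// /proc/self/status, prefixed with a tag describing the measurement point.
void printMemoryUsage(const std::string& tag)
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line))
    {
        if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0)
        {
            std::cout << tag << ": " << line << std::endl;
        }
    }
}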

Here is the CMakeLists.txt:

cmake_minimum_required(VERSION 3.0)
Project(MemoryTest)

#find_package(Boost REQUIRED COMPONENTS thread)
add_executable( MemoryTest MemoryTest2.cpp)
target_link_libraries(MemoryTest  pthread )

It is not uncommon that memory managers use thread-local pools to avoid contention on some global data structure (which would need to be protected by a lock). So even if thread 1 releases its memory, this memory is only returned to the local memory pool of thread 1. Now if thread 2 wants to allocate some memory, it has to allocate its own pool.
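
If this is what happens here, one thing to try (my suggestion, not something the original code does) is to cap the number of glibc malloc arenas so that both threads share a single pool, either by exporting MALLOC_ARENA_MAX=1 in the environment or by calling mallopt() at the very start of main(), before the worker thread is created:

#include <malloc.h>   // glibc-specific header

// Limit glibc malloc to one arena; returns non-zero on success.
// Call this before any thread is spawned so all allocations use the same pool.
int limitMallocArenas()
{
    return mallopt(M_ARENA_MAX, 1);
}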

I assume that you use the standard glibc malloc, which is derived from ptmalloc. Alternatively you can try using some other memory manager like jemalloc, tcmalloc, Hoard or mimalloc. Though most of them use thread-local pools (AFAIK), there are differences in how these are handled, so you will probably see differences in memory usage and performance.
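
If you stay with glibc malloc, another glibc-specific option (again a sketch of mine, not something the posted code calls) is malloc_trim(), which asks the allocator to return free heap pages to the kernel. Calling it after clearData() usually brings VmRSS back down even if the arenas themselves are kept alive; malloc_stats() can additionally print per-arena usage to stderr so the cases can be compared:

#include <malloc.h>   // glibc-specific header

// Hand free heap pages back to the OS and dump allocator statistics.
void trimAndReport()
{
    malloc_trim(0);   // pad = 0: keep no extra free space at the top of the heap
    malloc_stats();   // prints per-arena usage to stderr
}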
