Using a C++11 shared_ptr in multiple threads

Recently I have been thinking about a high-performance, event-driven, multi-threaded framework using C++11. It mainly builds on C++11 facilities such as std::thread, std::condition_variable, std::mutex, std::shared_ptr, etc. In general, the framework has three basic components: job, worker and streamline; it really does look like a factory. When a user builds his business model on the server side, he only needs to think about the data and its processor. Once the model is established, he simply writes a data class that inherits from job and a processor class that inherits from worker.

For example:

class Data : public job {};
class Processor : public worker {};
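
The job and worker base classes are not shown in the question. Judging from how they are used further down (thread_proc calls get_type() on the job and process_task() on the worker), they presumably look roughly like the following sketch; the exact names and signatures are assumptions:

#include <memory>

// Rough sketch only -- the real base classes are not shown in the question.
class evd_thread_job {
public:
    virtual ~evd_thread_job() = default;
    virtual int get_type() const { return 0; }  // thread_proc() treats -1 as "stop"
};

class evd_thread_processor {
public:
    virtual ~evd_thread_processor() = default;
    // invoked on the worker thread with the dispatched job
    virtual void process_task(std::shared_ptr<evd_thread_job> job) = 0;
};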

When the server receives data, it simply creates a Data object via auto data = std::make_shared<Data>() in the data-source callback thread and calls streamline.job_dispatch to transfer the processor and data to another thread. Of course, the user does not have to think about freeing the memory. streamline.job_dispatch mainly does the following:

void evd_thread_pool::job_dispatch(std::shared_ptr<evd_thread_job> job) {
    auto task = std::make_shared<evd_task_wrap>(job);
    // the worker has been registered in streamline beforehand
    task->worker = streamline.worker;
    {
        std::unique_lock<std::mutex> lck(streamline.mutex);
        streamline.task_list.push_back(std::move(task));
    }
    streamline.cv.notify_all();
}

The evd_task_wrap used in job_dispatch is defined as:

struct evd_task_wrap {
    std::shared_ptr<evd_thread_job> order;
    std::shared_ptr<evd_thread_processor> worker;
    evd_task_wrap(std::shared_ptr<evd_thread_job>& o)
    :order(o) {}
};
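
The streamline member itself is not defined in the question either. Based on how it is used in job_dispatch and thread_proc, it presumably bundles something like the following (the struct name and layout are assumptions):

#include <condition_variable>
#include <list>
#include <memory>
#include <mutex>

// Rough sketch of what 'streamline' appears to contain, inferred from its usage.
struct evd_streamline {
    std::mutex mutex;                                     // guards task_list
    std::condition_variable cv;                           // signals newly queued tasks
    std::list<std::shared_ptr<evd_task_wrap>> task_list;  // FIFO of pending tasks
    std::shared_ptr<evd_thread_processor> worker;         // processor registered up front
};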

Finally the task_wrap is dispatched to the processing thread through task_list, which is a std::list object. The processing thread mainly does the following:

void evd_factory_impl::thread_proc() {
    std::shared_ptr<evd_task_wrap> wrap = nullptr;
    while (true) {
        {
            std::unique_lock<std::mutex> lck(streamline.mutex);
            if (streamline.task_list.empty())
                streamline.cv.wait(lck, 
                [&]()->bool{return !streamline.task_list.empty();});
            wrap = std::move(streamline.task_list.front());
            streamline.task_list.pop_front();
        }
        if (-1 == wrap->order->get_type())
            break;
        wrap->worker->process_task(wrap->order);
        wrap.reset();
    }
}
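
Note that the loop exits when a job reports get_type() == -1, so shutting the pool down presumably means dispatching a dedicated stop job; a hedged sketch (the stop_job name is hypothetical, and the signature follows the base-class sketch above):

// Hypothetical "poison pill" used to end thread_proc(); the loop above breaks
// when get_type() returns -1.
class stop_job : public evd_thread_job {
public:
    int get_type() const override { return -1; }
};

// presumed shutdown sequence:
//   pool.job_dispatch(std::make_shared<stop_job>());
//   worker_thread.join();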

But I don't know why the process often crashes in the thread_proc function. The core dump shows that sometimes wrap is an empty shared_ptr, or a segmentation fault happens in _Sp_counted_ptr_inplace::_M_dispose, which is called from wrap.reset(). I suppose the shared_ptr has a thread-synchronization problem in this scenario, although I know the control block of shared_ptr is thread-safe. And of course the shared_ptr in job_dispatch and the one in thread_proc are different shared_ptr objects, even though they point to the same storage. Does anyone have a more specific suggestion on how to solve this problem? Or is there a similar lightweight framework with automatic memory management using C++11?


An example of process_task:

void log_handle::process_task(std::shared_ptr<crx::evd_thread_job> job) {
    auto j = std::dynamic_pointer_cast<log_job>(job);
    j->log->Printf(0, j->print_str.c_str());
    write(STDOUT_FILENO, j->print_str.c_str(), j->print_str.size());
}

class log_factory {
public:
    log_factory(const std::string& name);
    virtual ~log_factory();

    void print_ts(const char *format, ...) {
        // here dispatch the job
        char log_buf[4096] = {0};
        va_list args;
        va_start(args, format);
        vsprintf(log_buf, format, args);
        va_end(args);
        auto job = std::make_shared<log_job>(log_buf, &m_log);
        m_log_th.job_dispatch(job);
    }

public:
    E15_Log m_log;
    std::shared_ptr<log_handle> m_log_handle;
    crx::evd_thread_pool m_log_th;
};
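
For completeness, log_job is not shown either; from make_shared<log_job>(log_buf, &m_log) and the fields read in process_task, it presumably looks something like this sketch (member names inferred from the usage above):

#include <string>

// Rough sketch of log_job, inferred from how it is constructed and consumed.
class log_job : public crx::evd_thread_job {
public:
    log_job(const char *str, E15_Log *l) : print_str(str), log(l) {}

    std::string print_str;  // formatted log line to emit
    E15_Log    *log;        // logger owned by log_factory (m_log)
};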

I detected a problem in your code, which may or may not be related:

You use notify_all from your condition variable. That will awaken ALL threads from sleep. It is OK if you wrap your wait in a while loop, like:

while (streamline.task_list.empty())
    streamline.cv.wait(lck, [&]()->bool{return !streamline.task_list.empty();});

But since you are using an if, all threads leave the wait. If you dispatch a single product and have several consumer threads, all but one thread will call wrap = std::move(streamline.task_list.front()); while the task list is empty, causing UB.
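
Applied to the question's thread_proc, that change would look roughly like this (only the locked block changes):

// thread_proc() consumer block with the wait wrapped in a while, as suggested above
{
    std::unique_lock<std::mutex> lck(streamline.mutex);
    while (streamline.task_list.empty())
        streamline.cv.wait(lck,
            [&]()->bool { return !streamline.task_list.empty(); });
    wrap = std::move(streamline.task_list.front());  // list is guaranteed non-empty here
    streamline.task_list.pop_front();
}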
