
Using c++11 shared_ptr across multiple threads

Recently I have been working on a high-performance, event-driven, multi-threaded framework in c++11. It mainly relies on c++11 facilities such as std::thread, std::condition_variable, std::mutex and std::shared_ptr. In general, the framework has three basic components: job, worker and streamline; it really does resemble a factory. When a user builds his business model on the server side, he only needs to think about the data and its processor. Once the model is established, the user simply derives a data class from job and a processor class from worker.

For example:

class Data : public job {};
class Processor : public worker {};
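
For context, the two base classes only need a tiny interface for the snippets below to make sense. This is a simplified sketch of them (the real definitions are larger; the virtual/default details here are assumptions):

#include <memory>

class evd_thread_job {
public:
    virtual ~evd_thread_job() = default;
    // -1 is used further down as a "stop the worker thread" marker
    virtual int get_type() const { return 0; }
};

class evd_thread_processor {
public:
    virtual ~evd_thread_processor() = default;
    virtual void process_task(std::shared_ptr<evd_thread_job> job) = 0;
};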

When the server receives data, it simply creates a Data object with auto data = std::make_shared<Data>() in the data-source callback thread and calls streamline.job_dispatch to hand the data and its processor over to another thread. The user never has to free the memory himself. streamline.job_dispatch mainly does the following:

void evd_thread_pool::job_dispatch(std::shared_ptr<evd_thread_job> job) {
    auto task = std::make_shared<evd_task_wrap>(job);
    task->worker = streamline.worker;  
    // worker has been registered in streamline first of all
    {
        std::unique_lock<std::mutex> lck(streamline.mutex);
        streamline.task_list.push_back(std::move(task));
    }
    streamline.cv.notify_all();
}
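
From the data-source callback thread the dispatch side then looks roughly like this (a sketch; Data is the user class from the example above, assumed to derive from evd_thread_job, and pool stands for the evd_thread_pool instance):

void on_data_arrived(evd_thread_pool& pool) {
    auto data = std::make_shared<Data>();   // allocated here, never freed by hand
    pool.job_dispatch(data);                // ownership is now shared with the streamline
}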

The evd_task_wrap used in job_dispatch is defined as:

struct evd_task_wrap {
    std::shared_ptr<evd_thread_job> order;
    std::shared_ptr<evd_thread_processor> worker;
    evd_task_wrap(std::shared_ptr<evd_thread_job>& o)
        : order(o) {}
};
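
The streamline member referenced above is essentially the shared queue state. Its real definition is not shown here, but from the way it is used it amounts to roughly:

#include <condition_variable>
#include <list>
#include <memory>
#include <mutex>

struct evd_streamline {
    std::mutex mutex;                                     // guards task_list
    std::condition_variable cv;                           // signals newly pushed tasks
    std::list<std::shared_ptr<evd_task_wrap>> task_list;  // pending work
    std::shared_ptr<evd_thread_processor> worker;         // registered before any dispatch
};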

Finally the task_wrap is dispatched to the processing thread through task_list, which is a std::list. The processing thread mainly does the following:

void evd_factory_impl::thread_proc() {
    std::shared_ptr<evd_task_wrap> wrap = nullptr;
    while (true) {
        {
            std::unique_lock<std::mutex> lck(streamline.mutex);
            if (streamline.task_list.empty())
                streamline.cv.wait(lck, 
                [&]()->bool{return !streamline.task_list.empty();});
            wrap = std::move(streamline.task_list.front());
            streamline.task_list.pop_front();
        }
        if (-1 == wrap->order->get_type())
            break;
        wrap->worker->process_task(wrap->order);
        wrap.reset();
    }
}
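
Stopping a processing thread is done by dispatching a job whose get_type() returns -1, which makes thread_proc break out of its loop. A minimal sketch of such a job (stop_job is only an illustration, not a class from the framework):

class stop_job : public evd_thread_job {
public:
    int get_type() const override { return -1; }   // thread_proc treats -1 as "exit"
};

// on the controlling thread:
// pool.job_dispatch(std::make_shared<stop_job>());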

But I don't know why the process often crashes in thread_proc. The coredump shows that sometimes wrap is an empty shared_ptr, or that a segmentation fault happens in _Sp_counted_ptr_inplace::_M_dispose, which is called from wrap.reset(). So I suspect the shared_ptr runs into a thread-synchronization problem in this scenario, even though I know the control block of shared_ptr is thread-safe. And of course the shared_ptr in job_dispatch and the one in thread_proc are different shared_ptr objects, even though they point to the same storage. Does anyone have a more specific suggestion on how to solve this problem? Or is there a similar lightweight c++11 framework with automatic memory management?
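
To make that assumption explicit: what I rely on is that distinct shared_ptr objects sharing one control block can be used from different threads, because the reference count is updated atomically; only accessing the very same shared_ptr object from several threads without a lock would be a data race. A standalone illustration:

#include <memory>
#include <thread>

int main() {
    auto p = std::make_shared<int>(42);
    auto q = p;                                // second owner, same control block

    std::thread t1([p] { auto local = p; });   // safe: t1 works on its own captured copy
    std::thread t2([q] { auto local = q; });   // safe: q is a different shared_ptr object
    t1.join();
    t2.join();
    // Unsafe, for contrast: two threads calling reset()/operator= on the same p.
}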


The example of process_task, together with the class that dispatches the jobs:

void log_handle::process_task(std::shared_ptr<crx::evd_thread_job> job) {
    auto j = std::dynamic_pointer_cast<log_job>(job);
    j->log->Printf(0, j->print_str.c_str());
    write(STDOUT_FILENO, j->print_str.c_str(), j->print_str.size());
}

class log_factory {
public:
    log_factory(const std::string& name);
    virtual ~log_factory();

    void print_ts(const char *format, ...) {
        // here dispatch the job
        char log_buf[4096] = {0};
        va_list args;
        va_start(args, format);
        vsprintf(log_buf, format, args);
        va_end(args);

        auto job = std::make_shared<log_job>(log_buf, &m_log);
        m_log_th.job_dispatch(job);
    }

public:
    E15_Log m_log;
    std::shared_ptr<log_handle> m_log_handle;
    crx::evd_thread_pool m_log_th;
};
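
For reference, the log_job used above is essentially a string plus a pointer to the logger; its definition is not shown, but from the way it is used it looks roughly like:

class log_job : public crx::evd_thread_job {
public:
    log_job(const char *str, E15_Log *l) : print_str(str), log(l) {}

    std::string print_str;   // the formatted text produced in print_ts
    E15_Log *log;            // the factory's logger instance
};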

I detected a problem in your code, which may or may not be related:

You use notify_all on your condition variable. That will wake up ALL the threads sleeping on it. It is OK if you wrap your wait in a while loop, like:

while (streamline.task_list.empty())
    streamline.cv.wait(lck, [&]()->bool{return !streamline.task_list.empty();});

But since you are using an if, all threads leave the wait. If you dispatch a single product while having several consumer threads, all but one of them will call wrap = std::move(streamline.task_list.front()); on an empty task list and cause UB.
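
Applied to your thread_proc it would look roughly like this (I also moved wrap inside the loop, which makes the explicit reset() unnecessary; the rest is unchanged):

void evd_factory_impl::thread_proc() {
    while (true) {
        std::shared_ptr<evd_task_wrap> wrap;
        {
            std::unique_lock<std::mutex> lck(streamline.mutex);
            while (streamline.task_list.empty())
                streamline.cv.wait(lck);   // the emptiness check is repeated after every wakeup
            wrap = std::move(streamline.task_list.front());
            streamline.task_list.pop_front();
        }
        if (-1 == wrap->order->get_type())
            break;
        wrap->worker->process_task(wrap->order);
    }
}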
