
Helgrind reports synchronization errors in simple boost::asio::thread_pool program

I'm experimenting with boost::asio::thread_pool, and Helgrind reports errors in a simple program whose task function is empty. Where is the problem, and how can I fix it?


#include <boost/asio/thread_pool.hpp>
#include <boost/asio/post.hpp>

int main() {
    unsigned thread_num = 4;
    boost::asio::thread_pool pool(thread_num);

    auto task = []() {};  // deliberately empty

    for (unsigned i = 0; i < thread_num; ++i)
        boost::asio::post(pool, task);

    pool.join();

    return 0;
}

Here is the Helgrind output:

==266706== Thread #1 is the program's root thread
==266706== 
==266706== ----------------------------------------------------------------
==266706== 
==266706== Thread #1: pthread_cond_{signal,broadcast}: dubious: associated lock is not held by any thread
==266706==    at 0x48405D6: ??? (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_helgrind-amd64-linux.so)
==266706==    by 0x11508D: bool boost::asio::detail::posix_event::maybe_unlock_and_signal_one<boost::asio::detail::conditionally_enabled_mutex::scoped_lock>(boost::asio::detail::conditionally_enabled_mutex::scoped_lock&) (in /home/arno/Programming/test/a.out)
==266706==    by 0x111AAD: boost::asio::detail::conditionally_enabled_event::maybe_unlock_and_signal_one(boost::asio::detail::conditionally_enabled_mutex::scoped_lock&) (in /home/arno/Programming/test/a.out)
==266706==    by 0x11361E: boost::asio::detail::scheduler::wake_one_thread_and_unlock(boost::asio::detail::conditionally_enabled_mutex::scoped_lock&) (in /home/arno/Programming/test/a.out)
==266706==    by 0x1132C4: boost::asio::detail::scheduler::post_immediate_completion(boost::asio::detail::scheduler_operation*, bool) (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DD99: void boost::asio::thread_pool::executor_type::post<boost::asio::detail::work_dispatcher<main::{lambda()#1}>, std::allocator<void> >(boost::asio::detail::work_dispatcher<main::{lambda()#1}>&&, std::allocator<void> const&) const (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DBFB: void boost::asio::detail::initiate_post::operator()<main::{lambda()#1}&, boost::asio::thread_pool::executor_type const&>(main::{lambda()#1}&, boost::asio::thread_pool::executor_type const&) const (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DB82: void boost::asio::async_result<main::{lambda()#1}, void ()>::initiate<boost::asio::detail::initiate_post, {lambda()#1}&, boost::asio::thread_pool::executor_type const&>(boost::asio::detail::initiate_post&&, {lambda()#1}&, boost::asio::thread_pool::executor_type const&) (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DB54: std::enable_if<void ()::async_result_has_initiate_memfn<main::{lambda()#1}&, void ()>::value, boost::asio::async_result<std::decay<void ()::async_result_has_initiate_memfn>::type, main::{lambda()#1}&>::return_type>::type boost::asio::async_initiate<main::{lambda()#1}&, void (), boost::asio::detail::initiate_post, boost::asio::thread_pool::executor_type const&>(boost::asio::detail::initiate_post&&, void (&)()::async_result_has_initiate_memfn, boost::asio::thread_pool::executor_type const&) (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DB12: boost::asio::async_result<std::decay<main::{lambda()#1}&>::type, void ()>::return_type boost::asio::post<boost::asio::thread_pool::executor_type, main::{lambda()#1}&>(boost::asio::thread_pool::executor_type const&, std::decay&&, std::enable_if<boost::asio::is_executor<boost::asio::async_result<std::decay<main::{lambda()#1}&>::type, void ()>::return_type>::value, void>::type*) (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DABD: boost::asio::async_result<std::decay<main::{lambda()#1}&>::type, void ()>::return_type boost::asio::post<boost::asio::thread_pool, main::{lambda()#1}&>(boost::asio::thread_pool&, std::decay&&, std::enable_if<std::is_convertible<boost::asio::thread_pool, boost::asio::execution_context&>::value, void>::type*) (in /home/arno/Programming/test/a.out)
==266706==    by 0x10DA11: main (in /home/arno/Programming/test/a.out)

https://linux.die.net/man/3/pthread_cond_signal

The pthread_cond_broadcast() or pthread_cond_signal() functions may be called by a thread whether or not it currently owns the mutex that threads calling pthread_cond_wait() or pthread_cond_timedwait() have associated with the condition variable during their waits; however, if predictable scheduling behavior is required, then that mutex shall be locked by the thread calling pthread_cond_broadcast() or pthread_cond_signal().

It's just a diagnostic warning pointing at a common cause of inefficient scheduling around condition variables. If you (as a producer) don't keep the mutex locked while signaling, you can cause unnecessary wakeups: your update of the condition's state may already have been processed as soon as you unlocked the mutex, so your signal then wakes another thread without any change of state.

Keeping the mutex locked for the whole update of the condition's state, including the signaling, is handled efficiently by contrast: the consumer has registered pairwise on both the mutex and the condition variable, so signaling the CV puts the consumer directly onto the wait list for the mutex without activating the thread even once.

That's just inefficient, though, not a logic error. And Helgrind only reported it as "dubious", not as an error.
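If the warning clutters your Helgrind runs, you can silence it with a suppression file. Rather than guessing the suppression kind, let Valgrind generate the exact block for you (`--gen-suppressions` is a standard Valgrind option):

```shell
# Re-run under Helgrind and have it print a ready-made suppression block
# after each report; paste the block into a file and pass it back with
# --suppressions=<file> on subsequent runs.
valgrind --tool=helgrind --gen-suppressions=all ./a.out
```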
