Why does std::condition_variable make scheduling unfair?
I am trying to create a simple pool object that hands out access to a set of shared resources, more or less fairly, to any thread that asks for one. On Windows I would normally have an array of Mutexes and call WaitForMultipleObjects with bWaitAll = FALSE (see windows_pool_of_n_t below). But I am hoping to one day port this to other operating systems, so I would like to stick to the standard. A deque of resources, with a condition_variable on size() != 0, seemed like the obvious solution (see pool_of_n_t below).

But for reasons I do not understand, that code serializes thread access. I am not expecting strict fairness, but this is just about the worst possible case: the thread that had the lock last time always gets the lock next time. It is not that std::mutex falls short of Windows' more-or-less-fair scheduling, because using just a mutex without the condition variable works as expected, though only for a pool of one, of course (see pool_of_one_t below).

Can anyone explain this? Is there a way around it?
Results:
C:\temp\stdpool>bin\stdpool.exe
pool:pool_of_one_t
thread 0:19826 ms
thread 1:19846 ms
thread 2:19866 ms
thread 3:19886 ms
thread 4:19906 ms
thread 5:19926 ms
thread 6:19946 ms
thread 7:19965 ms
thread 8:19985 ms
thread 9:20004 ms
pool:windows_pool_of_n_t(1)
thread 0:19819 ms
thread 1:19838 ms
thread 2:19858 ms
thread 3:19878 ms
thread 4:19898 ms
thread 5:19918 ms
thread 6:19938 ms
thread 7:19958 ms
thread 8:19978 ms
thread 9:19997 ms
pool:pool_of_n_t(1)
thread 9:3637 ms
thread 0:4538 ms
thread 6:7558 ms
thread 4:9779 ms
thread 8:9997 ms
thread 2:13058 ms
thread 1:13997 ms
thread 3:17076 ms
thread 5:17995 ms
thread 7:19994 ms
pool:windows_pool_of_n_t(2)
thread 1:9919 ms
thread 0:9919 ms
thread 2:9939 ms
thread 3:9939 ms
thread 5:9958 ms
thread 4:9959 ms
thread 6:9978 ms
thread 7:9978 ms
thread 9:9997 ms
thread 8:9997 ms
pool:pool_of_n_t(2)
thread 2:6019 ms
thread 0:7882 ms
thread 4:8102 ms
thread 5:8182 ms
thread 1:8382 ms
thread 8:8742 ms
thread 7:9162 ms
thread 9:9641 ms
thread 3:9802 ms
thread 6:10201 ms
pool:windows_pool_of_n_t(5)
thread 4:3978 ms
thread 3:3978 ms
thread 2:3979 ms
thread 0:3980 ms
thread 1:3980 ms
thread 9:3997 ms
thread 7:3999 ms
thread 6:3999 ms
thread 5:4000 ms
thread 8:4001 ms
pool:pool_of_n_t(5)
thread 2:3080 ms
thread 0:3498 ms
thread 8:3697 ms
thread 3:3699 ms
thread 6:3797 ms
thread 7:3857 ms
thread 1:3978 ms
thread 4:4039 ms
thread 9:4057 ms
thread 5:4059 ms
Code:
#include <iostream>
#include <deque>
#include <vector>
#include <mutex>
#include <thread>
#include <sstream>
#include <chrono>
#include <iomanip>
#include <cassert>
#include <condition_variable>
#include <windows.h>
using namespace std;
class pool_t {
public:
    virtual ~pool_t() {}  // virtual destructor for a polymorphic base
    virtual void check_in(int resource) = 0;
    virtual int check_out() = 0;
    virtual string pool_name() = 0;
};
class pool_of_one_t : public pool_t {
    mutex lock;
public:
    virtual void check_in(int resource) {
        lock.unlock();
    }
    virtual int check_out() {
        lock.lock();
        return 0;
    }
    virtual string pool_name() {
        return "pool_of_one_t";
    }
};
class windows_pool_of_n_t : public pool_t {
    vector<HANDLE> resources;
public:
    windows_pool_of_n_t(int size) {
        for (int i = 0; i < size; ++i)
            resources.push_back(CreateMutex(NULL, FALSE, NULL));
    }
    ~windows_pool_of_n_t() {
        for (auto resource : resources)
            CloseHandle(resource);
    }
    virtual void check_in(int resource) {
        ReleaseMutex(resources[resource]);
    }
    virtual int check_out() {
        DWORD result = WaitForMultipleObjects(resources.size(),
                                              resources.data(), FALSE, INFINITE);
        assert(result >= WAIT_OBJECT_0
               && result < WAIT_OBJECT_0 + resources.size());
        return result - WAIT_OBJECT_0;
    }
    virtual string pool_name() {
        ostringstream name;
        name << "windows_pool_of_n_t(" << resources.size() << ")";
        return name.str();
    }
};
class pool_of_n_t : public pool_t {
    deque<int> resources;
    mutex lock;
    condition_variable not_empty;
public:
    pool_of_n_t(int size) {
        for (int i = 0; i < size; ++i)
            check_in(i);
    }
    virtual void check_in(int resource) {
        unique_lock<mutex> resources_guard(lock);
        resources.push_back(resource);
        resources_guard.unlock();
        not_empty.notify_one();
    }
    virtual int check_out() {
        unique_lock<mutex> resources_guard(lock);
        not_empty.wait(resources_guard,
                       [this](){ return resources.size() > 0; });
        auto resource = resources.front();
        resources.pop_front();
        bool notify_others = resources.size() > 0;
        resources_guard.unlock();
        if (notify_others)
            not_empty.notify_one();
        return resource;
    }
    virtual string pool_name() {
        ostringstream name;
        name << "pool_of_n_t(" << resources.size() << ")";
        return name.str();
    }
};
void worker_thread(int id, pool_t& resource_pool)
{
    auto start_time = chrono::system_clock::now();
    for (int i = 0; i < 100; ++i) {
        auto resource = resource_pool.check_out();
        this_thread::sleep_for(chrono::milliseconds(20));
        resource_pool.check_in(resource);
        this_thread::yield();
    }
    static mutex cout_lock;
    {
        unique_lock<mutex> cout_guard(cout_lock);
        cout << "thread " << id << ":"
             << chrono::duration_cast<chrono::milliseconds>(
                    chrono::system_clock::now() - start_time).count()
             << " ms" << endl;
    }
}
void test_it(pool_t& resource_pool)
{
    cout << "pool:" << resource_pool.pool_name() << endl;
    vector<thread> threads;
    for (int i = 0; i < 10; ++i)
        threads.push_back(thread(worker_thread, i, ref(resource_pool)));
    for (auto& thread : threads)
        thread.join();
}
int main(int argc, char* argv[])
{
    // Named objects rather than temporaries: binding a temporary to the
    // non-const pool_t& parameter is a non-standard MSVC extension.
    pool_of_one_t pool_of_one;
    test_it(pool_of_one);
    windows_pool_of_n_t windows_pool_of_1(1);
    test_it(windows_pool_of_1);
    pool_of_n_t pool_of_1(1);
    test_it(pool_of_1);
    windows_pool_of_n_t windows_pool_of_2(2);
    test_it(windows_pool_of_2);
    pool_of_n_t pool_of_2(2);
    test_it(pool_of_2);
    windows_pool_of_n_t windows_pool_of_5(5);
    test_it(windows_pool_of_5);
    pool_of_n_t pool_of_5(5);
    test_it(pool_of_5);
    return 0;
}
I ran your pool:pool_of_n_t(2) test on Linux and traced the problem to

this_thread::yield();

Results of the pool:pool_of_n_t(2) test on my machine:

1) with this_thread::yield():
$./a.out
pool:pool_of_n_t(2)
thread 0, run for:2053 ms
thread 9, run for:3721 ms
thread 5, run for:4830 ms
thread 6, run for:6854 ms
thread 3, run for:8229 ms
thread 4, run for:8353 ms
thread 7, run for:9441 ms
thread 2, run for:9482 ms
thread 1, run for:10127 ms
thread 8, run for:10426 ms
They are similar to yours.
2) The same test with this_thread::yield() replaced by pthread_yield():
$ ./a.out
pool:pool_of_n_t(2)
thread 0, run for:7922 ms
thread 3, run for:8853 ms
thread 4, run for:8854 ms
thread 1, run for:9077 ms
thread 5, run for:9364 ms
thread 9, run for:9446 ms
thread 7, run for:9594 ms
thread 2, run for:9615 ms
thread 8, run for:10170 ms
thread 6, run for:10416 ms
This is much fairer. You assume that this_thread::yield() actually hands the CPU to another thread, but it does not.
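If limiting yourself to POSIX systems is acceptable, one workaround is to call sched_yield() directly; as the disassembly below shows, pthread_yield() is just a jump to it. A minimal sketch (real_yield is my own name, not part of either API), assuming a POSIX build:

```cpp
#include <thread>
#if defined(__unix__) || defined(__APPLE__)
#include <sched.h>
#endif

// Drop-in replacement for this_thread::yield() in worker_thread():
// on builds where std::this_thread::yield() compiles to a no-op,
// sched_yield() still issues the real syscall.
inline void real_yield()
{
#if defined(__unix__) || defined(__APPLE__)
    sched_yield();               // POSIX: actually asks the kernel to reschedule
#else
    std::this_thread::yield();   // fall back to the standard call elsewhere
#endif
}
```

In worker_thread(), replacing this_thread::yield() with real_yield() should then reproduce the fairer numbers above on such builds.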
Here is the disassembly of this_thread::yield from gcc 4.8:
(gdb) disassembl this_thread::yield
Dump of assembler code for function std::this_thread::yield():
0x0000000000401fb2 <+0>: push %rbp
0x0000000000401fb3 <+1>: mov %rsp,%rbp
0x0000000000401fb6 <+4>: pop %rbp
0x0000000000401fb7 <+5>: retq
End of assembler dump.
I do not see any rescheduling here. (In this gcc 4.8 build, std::this_thread::yield() compiles to an empty function, presumably because libstdc++ was configured without _GLIBCXX_USE_SCHED_YIELD.)
Here is the disassembly of pthread_yield:
(gdb) disassemble pthread_yield
Dump of assembler code for function pthread_yield:
0x0000003149c084c0 <+0>: jmpq 0x3149c05448 <sched_yield@plt>
End of assembler dump.
(gdb) disassemble sched_yield
Dump of assembler code for function sched_yield:
0x00000031498cf520 <+0>: mov $0x18,%eax
0x00000031498cf525 <+5>: syscall
0x00000031498cf527 <+7>: cmp $0xfffffffffffff001,%rax
0x00000031498cf52d <+13>: jae 0x31498cf530 <sched_yield+16>
0x00000031498cf52f <+15>: retq
0x00000031498cf530 <+16>: mov 0x2bea71(%rip),%rcx # 0x3149b8dfa8
0x00000031498cf537 <+23>: xor %edx,%edx
0x00000031498cf539 <+25>: sub %rax,%rdx
0x00000031498cf53c <+28>: mov %edx,%fs:(%rcx)
0x00000031498cf53f <+31>: or $0xffffffffffffffff,%rax
0x00000031498cf543 <+35>: jmp 0x31498cf52f <sched_yield+15>
End of assembler dump.
I do not think the condition variable is the culprit.
Both the Linux "Completely Fair Scheduler" and the Windows thread scheduler operate on the assumption that the ideal goal is to give each thread its entire time slice (i.e., to be fair). They go so far as to assume that a thread which yields before consuming its entire time slice is entitled to a spot near the front of the queue [this is a rough simplification], because that is the "fair" thing to do.
I find this very unfortunate. If you have three threads, one of which can do work while the other two are blocked waiting on it, the Windows and Linux schedulers will bounce back and forth between the two blocked threads many times before finally giving the "right" thread a chance.
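Given that scheduler behavior, another way around the problem is to make the pool itself FIFO-fair, so that it no longer matters which waiter the scheduler happens to wake first. The sketch below (the class and its names are mine, not from the question) hands each check_out() caller a ticket and serves tickets strictly in arrival order; a woken thread goes back to sleep unless its ticket is at the head of the line:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Hypothetical fair_pool_of_n_t: check_out() requests are served
// strictly in arrival order, regardless of scheduler wake-up order.
class fair_pool_of_n_t {
    std::deque<int> resources;
    std::mutex lock;
    std::condition_variable turn;
    unsigned long next_ticket = 0;   // next ticket to hand out
    unsigned long now_serving = 0;   // ticket allowed to take a resource
public:
    explicit fair_pool_of_n_t(int size) {
        for (int i = 0; i < size; ++i)
            resources.push_back(i);
    }
    void check_in(int resource) {
        std::lock_guard<std::mutex> guard(lock);
        resources.push_back(resource);
        turn.notify_all();           // wake waiters; only the head ticket proceeds
    }
    int check_out() {
        std::unique_lock<std::mutex> guard(lock);
        unsigned long my_ticket = next_ticket++;
        turn.wait(guard, [&] {
            return my_ticket == now_serving && !resources.empty();
        });
        ++now_serving;
        int resource = resources.front();
        resources.pop_front();
        turn.notify_all();           // let the next ticket holder re-check
        return resource;
    }
};
```

notify_all() rather than notify_one() is deliberate: the one thread notify_one() happens to pick may not hold the head ticket, so all waiters re-check and exactly one proceeds. This trades some spurious wake-ups for strict FIFO order.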