
Running a fixed number of threads

With the new C++17 standard, I wonder if there is a good way to start a fixed number of threads and keep them busy until a batch of jobs is finished.

Can you tell me how I can achieve the desired functionality of this code:

std::vector<std::future<std::string>> futureStore;
const int batchSize             = 1000;
const int maxNumParallelThreads = 10;
int threadsTerminated           = 0;

while(threadsTerminated < batchSize)
{
    while(futureStore.size() < maxNumParallelThreads)
    {
        futureStore.emplace_back(std::async(someFunction));
    }
    // std::when_any is hypothetical here: this is the functionality I am looking for
    for(std::future<std::string>& readyFuture: std::when_any(futureStore.begin(), futureStore.end()))
    {
        auto retVal = readyFuture.get(); 
        // (possibly do something with the ret val)
        threadsTerminated++;
        // (the finished future would also have to be removed from futureStore)
    }
} 

I read that there was a proposal for an std::when_any function, but it did not make it into the standard.

Is there any support for this functionality (not necessarily for std::futures) in the current standard library? Is there a way to easily implement it, or do I have to resort to something like this?

This does not seem to me to be the ideal approach:

  1. All your main thread does is wait for the other threads to finish, polling the results of your futures. That thread is almost wasted.

  2. I don't know to what extent std::async reuses the threads' infrastructure in any suitable way, so you risk creating entirely new threads each time (apart from that, you might not create any threads at all if you do not specify std::launch::async explicitly; see the snippet below).
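To illustrate the launch-policy point, here is a minimal sketch (someFunction is just a stand-in for the real work):

#include <future>
#include <string>

std::string someFunction() { return "result"; }  // stand-in for the real work

int main()
{
    // Default policy: the implementation may choose std::launch::deferred,
    // in which case no new thread is created and someFunction() only runs
    // lazily when get()/wait() is called on the future.
    std::future<std::string> maybeDeferred = std::async(someFunction);

    // Explicit std::launch::async: the task runs as if on a new thread.
    std::future<std::string> definitelyAsync = std::async(std::launch::async, someFunction);

    maybeDeferred.get();
    definitelyAsync.get();
    return 0;
}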

I personally would prefer another approach:

  1. Create all the threads you want to use at once.
  2. Let each thread run a loop, repeatedly calling someFunction(), until you have reached the number of desired tasks.

The implementation might look similar to this example:

#include <chrono>
#include <cstdio>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

const int BatchSize = 20;
int tasksStarted = 0;
std::mutex mutex;
std::vector<std::string> results;

std::string someFunction()
{
    // dummy workload: just sleep for a while
    puts("worker started"); fflush(stdout);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    puts("worker done"); fflush(stdout);
    return "";
}

void runner()
{
    // claim the first task; if the batch is already exhausted, there is nothing to do
    {
        std::lock_guard<std::mutex> lk(mutex);
        if(tasksStarted >= BatchSize)
            return;
        ++tasksStarted;
    }
    for(;;)
    {
        std::string s = someFunction();
        {
            std::lock_guard<std::mutex> lk(mutex);
            results.push_back(s);
            // claim the next task, or stop if the whole batch has been started
            if(tasksStarted >= BatchSize)
                break;
            ++tasksStarted;
        }
    }
}

int main(int argc, char* argv[])
{
    const int MaxNumParallelThreads = 4;

    std::thread threads[MaxNumParallelThreads - 1]; // main thread is one, too!
    for(int i = 0; i < MaxNumParallelThreads - 1; ++i)
    {
        threads[i] = std::thread(&runner);
    }
    runner();

    for(int i = 0; i < MaxNumParallelThreads - 1; ++i)
    {
        threads[i].join();
    }

    // use results...

    return 0;
}

This way, you do not repeatedly create new threads; the same threads simply keep running until all tasks are done.

If these tasks are not all alike as in the example above, you might create a base class Task with a pure virtual function (e.g. execute or operator()) and create subclasses with the required implementation (holding any necessary data).

You could then place the instances into a std::vector or std::list (well, we won't iterate, so a list might be appropriate here) as pointers (otherwise, you get object slicing!) and let each thread remove one of the tasks when it has finished its previous one (do not forget to protect against race conditions!) and execute it. As soon as no tasks are left, return.
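A minimal sketch of that idea (the names Task, PrintTask and the exact queue layout are just assumptions for illustration):

#include <cstdio>
#include <list>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// base class for arbitrary tasks
class Task
{
public:
    virtual ~Task() = default;
    virtual void execute() = 0;
};

// one possible concrete task
class PrintTask : public Task
{
public:
    explicit PrintTask(int id) : id(id) { }
    void execute() override { std::printf("task %d done\n", id); }
private:
    int id;
};

std::mutex taskMutex;
std::list<std::unique_ptr<Task>> tasks;  // pointers, so no slicing

void taskRunner()
{
    for(;;)
    {
        std::unique_ptr<Task> task;
        {
            std::lock_guard<std::mutex> lk(taskMutex);
            if(tasks.empty())
                return;               // no more tasks: let the thread finish
            task = std::move(tasks.front());
            tasks.pop_front();
        }
        task->execute();              // run outside the lock
    }
}

int main()
{
    for(int i = 0; i < 20; ++i)
        tasks.push_back(std::make_unique<PrintTask>(i));

    const int MaxNumParallelThreads = 4;
    std::vector<std::thread> threads;
    for(int i = 0; i < MaxNumParallelThreads - 1; ++i)
        threads.emplace_back(&taskRunner);
    taskRunner();                     // the main thread works, too

    for(auto& t : threads)
        t.join();

    return 0;
}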

If you don't care about the exact number of threads, the simplest solution would be:

std::vector<std::future<std::string>> futureStore(batchSize);

std::generate(futureStore.begin(), futureStore.end(),
              [](){ return std::async(someTask); });

for(auto& future : futureStore) {
    std::string value = future.get();
    doWork(value);
}

From my experience, std::async will reuse threads after a certain number of threads has been spawned. It will not spawn 1000 threads. Also, you will not gain much of a performance boost (if any) when using a thread pool. I did measurements in the past, and the overall runtime was nearly identical.

The only reason I use thread pools now is to avoid the delay of creating threads inside the computation loop. If you have timing constraints, you may miss deadlines when using std::async for the first time, since it will create the threads on the first calls.

There is a good thread pool library for these applications. Have a look here: https://github.com/vit-vit/ctpl

#include <ctpl.h>

#include <algorithm>
#include <future>
#include <string>
#include <vector>

const unsigned int numberOfThreads = 10;
const unsigned int batchSize = 1000;

ctpl::thread_pool pool(numberOfThreads); // ten threads in the pool
std::vector<std::future<std::string>> futureStore(batchSize);

// note: ctpl passes the thread id as the first argument, so someTask must accept an int
std::generate(futureStore.begin(), futureStore.end(),
              [&pool](){ return pool.push(someTask); });

for(auto& future : futureStore) {
    std::string value = future.get();
    doWork(value);
}
