How to limit the number of running instances in C++

I have a C++ class that allocates a lot of memory. It does this by calling a third-party library that is designed to crash if it cannot allocate the memory, and sometimes my application creates several instances of my class in parallel threads. With too many threads I have a crash. My best idea for a solution is to make sure that there are never, say, more than three instances running at the same time. (Is this a good idea?) My current best idea for implementing that is to use a boost mutex, something along the lines of the following pseudo-code:

MyClass::MyClass(){
  my_thread_number = -1; //this is a class variable
  while (my_thread_number == -1)
    for (int i=0; i < MAX_PROCESSES; i++)
      if(try_lock a mutex named i){
        my_thread_number = i;
        break;
      }
  //Now I know that my thread has mutex number i and it is allowed to run
}

MyClass::~MyClass(){
    release mutex named my_thread_number
}

As you see, I am not quite sure of the exact syntax for mutexes here. So summing up, my questions are:

  1. Am I on the right track when I want to solve my memory error by limiting the number of threads?
  2. If yes, should I do it with mutexes or by other means?
  3. If yes, is my algorithm sound?
  4. Is there a nice example somewhere of how to use try_lock with boost mutexes?

Edit: I realized I am talking about threads, not processes. Edit: I am involved in building an application that can run on both Linux and Windows...

UPDATE My other answer addresses scheduling resources among threads (after the question was clarified).

It shows both a semaphore approach to coordinate work among (many) workers, and a thread_pool to limit workers in the first place and queue the work.

On Linux (and perhaps other OSes?) you can use a lock-file idiom (but it's not supported on some file systems and old kernels).

I would suggest using interprocess synchronisation objects.

E.g., using a Boost Interprocess named semaphore:

#include <boost/interprocess/sync/named_semaphore.hpp>
#include <boost/thread.hpp>
#include <iostream>

int main()
{
    using namespace boost::interprocess;
    named_semaphore sem(open_or_create, "ffed38bd-f0fc-4f79-8838-5301c328268c", 0ul);

    if (sem.try_wait())
    {
        std::cout << "Oops, second instance\n";
        sem.post(); // give the token back, or a later instance would be mistaken for the first
    }
    else
    {
        sem.post();

        // feign hard work for 30s
        boost::this_thread::sleep_for(boost::chrono::seconds(30));

        if (sem.try_wait())
        {
            sem.remove("ffed38bd-f0fc-4f79-8838-5301c328268c");
        }
    }
}

If you start one copy in the background, new copies will "refuse" to start ("Oops, second instance") for about 30s.

I have a feeling it might be easier to reverse the logic here. Mmm. Lemme try.

some time passes

Hehe. That was more tricky than I thought.

The thing is, you want to make sure that the lock doesn't remain when your application is interrupted or killed. In the interest of sharing the techniques for portably handling the signals:

#include <boost/interprocess/sync/named_semaphore.hpp>
#include <boost/thread.hpp>
#include <boost/asio.hpp>
#include <iostream>

#define MAX_PROCESS_INSTANCES 3

boost::interprocess::named_semaphore sem(
        boost::interprocess::open_or_create, 
        "4de7ddfe-2bd5-428f-b74d-080970f980be",
        MAX_PROCESS_INSTANCES);

// to handle signals:
boost::asio::io_service service;
boost::asio::signal_set sig(service);

int main()
{

    if (sem.try_wait())
    {
        sig.add(SIGINT);
        sig.add(SIGTERM);
        sig.add(SIGABRT);
        sig.async_wait([](boost::system::error_code,int sig){ 
                std::cerr << "Exiting with signal " << sig << "...\n";
                sem.post();
            });
        boost::thread sig_listener([&] { service.run(); });

        boost::this_thread::sleep_for(boost::chrono::seconds(3));

        service.post([&] { sig.cancel(); });
        sig_listener.join();
    }
    else
    {
        std::cout << "More than " << MAX_PROCESS_INSTANCES << " instances not allowed\n";
    }
}

There's a lot that could be explained there. Let me know if you're interested.

NOTE It should be quite obvious that if kill -9 is used on your application (forced termination), then all bets are off and you'll have to either remove the named semaphore object or explicitly unlock it ( post() ).

Here's a test run on my system:

sehe@desktop:/tmp$ (for a in {1..6}; do ./test& done; time wait)
More than 3 instances not allowed
More than 3 instances not allowed
More than 3 instances not allowed
Exiting with signal 0...
Exiting with signal 0...
Exiting with signal 0...

real    0m3.005s
user    0m0.013s
sys 0m0.012s

Here's a simplistic way to implement your own 'semaphore' (since I don't think the standard library or boost have one). This chooses a 'cooperative' approach and workers will wait for each other:

#include <boost/thread.hpp>
#include <boost/phoenix.hpp>
#include <iostream>

using namespace boost;
using namespace boost::phoenix::arg_names;

void the_work(int id)
{
    static int running = 0;
    std::cout << "worker " << id << " entered (" << running << " running)\n";

    static mutex mx;
    static condition_variable cv;

    // synchronize here, waiting until we can begin work
    {
        unique_lock<mutex> lk(mx);
        cv.wait(lk, phoenix::cref(running) < 3);
        running += 1;
    }

    std::cout << "worker " << id << " start work\n";
    this_thread::sleep_for(chrono::seconds(2));
    std::cout << "worker " << id << " done\n";

    // signal one other worker, if waiting
    {
        lock_guard<mutex> lk(mx);
        running -= 1;
        cv.notify_one(); 
    }
}

int main()
{
    thread_group pool;

    for (int i = 0; i < 10; ++i)
        pool.create_thread(bind(the_work, i));

    pool.join_all();
}

Now, I'd say it's probably better to have a dedicated pool of n workers taking their work from a queue in turns:

#include <boost/thread.hpp>
#include <boost/phoenix.hpp>
#include <boost/optional.hpp>
#include <boost/atomic.hpp>
#include <deque>
#include <iostream>

using namespace boost;
using namespace boost::phoenix::arg_names;

class thread_pool
{
  private:
      mutex mx;
      condition_variable cv;

      typedef function<void()> job_t;
      std::deque<job_t> _queue;

      thread_group pool;

      boost::atomic_bool shutdown;
      static void worker_thread(thread_pool& q)
      {
          while (auto job = q.dequeue())
              (*job)();
      }

  public:
      thread_pool() : shutdown(false) {
          for (unsigned i = 0; i < boost::thread::hardware_concurrency(); ++i)
              pool.create_thread(bind(worker_thread, ref(*this)));
      }

      void enqueue(job_t job) 
      {
          lock_guard<mutex> lk(mx);
          _queue.push_back(std::move(job));

          cv.notify_one();
      }

      optional<job_t> dequeue() 
      {
          unique_lock<mutex> lk(mx);
          namespace phx = boost::phoenix;

          cv.wait(lk, phx::ref(shutdown) || !phx::empty(phx::ref(_queue)));

          if (_queue.empty())
              return none;

          auto job = std::move(_queue.front());
          _queue.pop_front();

          return std::move(job);
      }

      ~thread_pool()
      {
          shutdown = true;
          {
              lock_guard<mutex> lk(mx);
              cv.notify_all();
          }

          pool.join_all();
      }
};

void the_work(int id)
{
    std::cout << "worker " << id << " entered\n";

    // no more synchronization; the pool size determines max concurrency
    std::cout << "worker " << id << " start work\n";
    this_thread::sleep_for(chrono::seconds(2));
    std::cout << "worker " << id << " done\n";
}

int main()
{
    thread_pool pool; // uses 1 thread per core

    for (int i = 0; i < 10; ++i)
        pool.enqueue(bind(the_work, i));
}

PS. You can use C++11 lambdas instead of boost::phoenix there if you prefer.
