
Why are my Laravel Queue Jobs failing after 60 seconds?

The Situation

I'm using Laravel Queues to process large numbers of media files; an individual job is expected to take minutes (let's just say up to an hour).
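For reference, a minimal sketch of what one of these jobs might look like, assuming a Laravel 5.x-era application skeleton where queued jobs extend the generated App\Jobs\Job base class; the class name ProcessMediaFile and its constructor argument are hypothetical and not taken from the question:

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job class, shown for illustration only.
class ProcessMediaFile extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $path;

    public function __construct($path)
    {
        $this->path = $path;
    }

    public function handle()
    {
        // Long-running media work (transcoding, fingerprinting, etc.) goes here
        // and may take up to an hour for a single file.
    }
}

Each media file then gets its own job, dispatched e.g. from a controller or console command with $this->dispatch(new ProcessMediaFile($path));.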

I am using Supervisor to run my queue, and I am running 20 processes at a time. My supervisor config file looks like this:

[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log

There are a few oddities that I don't know how to explain or correct:

  1. My jobs fairly consistently fail after running for 60 to 65 seconds.
  2. After being marked as failed, the job continues to run; eventually it ends up resolving successfully.
  3. When I run the failed task in isolation to find the cause of the issue, it succeeds just fine.

I strongly believe this is a timeout issue; however, I was under the impression that --timeout=0 would result in an unlimited timeout.
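For what it's worth, with the database driver there is another 60-second value that is easy to overlook: the queue connection's job expiration in config/queue.php. The sketch below shows the stock Laravel 5.x block and is an assumption about the setup, not something taken from the question (the key is named expire in older releases and retry_after from Laravel 5.4 on):

// config/queue.php (stock Laravel 5.x defaults, shown for illustration)
'connections' => [

    'database' => [
        'driver' => 'database',
        'table'  => 'jobs',
        'queue'  => 'default',
        // A job reserved for longer than this many seconds is considered expired
        // and is released back onto the queue ('retry_after' in Laravel 5.4+).
        'expire' => 60,
    ],

],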

The Question

How can I prevent this temporary "failure" job state? Are there other places where a queue timeout might be invoked that I'm not aware of?

In my case, I am using Symfony\Component\Process\Process, which has its own default timeout of 60 seconds, so I have to set its timeout as well:

use Symfony\Component\Process\Process;

$process = new Process([...]);
$process->setTimeout(null); // null disables the Process component's 60-second default timeout
