
Laravel - Schema - MySql - Temp Table “Already Exists”

I am using Laravel 5.4 for a web app, RabbitMQ for a message queue layer, and the Laravel queue worker. I have two related issues:

Temporary Tables

I have the following table-creation code in my constructor:

Schema::create('tmp_products', function (Blueprint $table) {
    $table->temporary();
    $table->integer('id');
    $table->string('alias', 255);
    $table->string('include', 255)->nullable();
    $table->string('exclude', 255)->nullable();
});

Note the use of

$table->temporary();

When multiple instances of this process run concurrently, I get the following error:

PDOException: SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'tmp_products' already exists in /var/www/myproject/vendor/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOStatement.php:91

At first I thought the table might not actually be temporary; however, I don't see it in MySQL Workbench, so that seems unlikely.

It looks as though the multiple processes are somehow sharing connection state (temporary tables are session-specific). The code runs as a Laravel php artisan queue:work command, managed by supervisord (with numprocs=3), and I can see in htop that there are three processes with unique PIDs, so I don't understand how they could be sharing connection state.
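To rule out a genuinely shared session, one quick diagnostic (just a sketch, not code from my actual job) is to log MySQL's connection ID from inside each worker; if the processes really do have their own connections, each one should report a different ID:

use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

// Temporary tables are scoped to the MySQL session, so distinct IDs here
// mean the workers are not sharing connection state.
$connectionId = DB::selectOne('SELECT CONNECTION_ID() AS id')->id;
Log::info('Worker PID ' . getmypid() . " uses MySQL connection {$connectionId}");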

Queue - Failed Jobs

What's more interesting is that I run the queue worker with the flag --tries=0 (i.e. do not retry messages). After the above exception is thrown inside the job's handle() method, the message should immediately be moved to the Laravel failed_jobs table. Instead, I see an infinite loop of exceptions and the message never leaves the queue.

So I guess my questions are:

  1. How can queue:work processes share DB-connection state?
  2. Why does this particular scenario stop messages from failing, when they do fail as expected if I explicitly throw new Exception(); in my handle() method?

Any help is appreciated.

Thanks,

EDIT: I figured out why the failing jobs were not entering the failed_jobs table. Setting --tries to 0 appears to make jobs retry forever; setting it to 1 fixed it.
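For reference, Laravel 5.4 also lets the job class declare its own attempt limit; here is a minimal sketch (the class name is made up) of the per-class equivalent of running the worker with --tries=1:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ImportProducts implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Fail the job after a single attempt instead of retrying forever.
    public $tries = 1;

    public function handle()
    {
        // ... job logic ...
    }
}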

UPDATE: The same error occurs when using raw PDO:

$pdo = DB::connection()->getPdo();
$pdo->exec("CREATE TABLE tmp_products (id INT NOT NULL, alias VARCHAR(255) NOT NULL, include VARCHAR(255) NULL, exclude VARCHAR(255) NULL, PRIMARY KEY (id));");

OK, I figured it out. I had totally misunderstood how Laravel queue workers operate. In the back of my head I assumed they work like Apache, spawning a worker for each job in the background and destroying it when finished. That is not the case: each worker listens to the queue, waits, and processes jobs serially in the same long-running process. It does not recreate a DB session on every job.

Incidentally, this means that the same error can affect a single worker, on the second job that it processes.

So the temp table from the previous job still existed when the next job ran. Now I just check whether the table exists and skip recreating it; one way to guard the creation is sketched below.
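My actual fix was to skip creation when the table already exists; an alternative guard (sketched here, not my exact code) is to drop any leftover temporary table from the previous job in the same session before recreating it. Note that Schema::hasTable() queries information_schema, where MySQL temporary tables generally don't appear, so a drop-first guard can be more reliable than an existence check.

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

// Remove any temp table left over from a previous job in this worker's
// MySQL session, then recreate it cleanly.
DB::statement('DROP TEMPORARY TABLE IF EXISTS tmp_products');

Schema::create('tmp_products', function (Blueprint $table) {
    $table->temporary();
    $table->integer('id');
    $table->string('alias', 255);
    $table->string('include', 255)->nullable();
    $table->string('exclude', 255)->nullable();
});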

Thanks
