
PHP, MySQL and concurrent updates

Hello, I have a browser game where various players schedule actions that can last, for example, one or two hours. A cron job runs every minute, checks for all the actions that are finished (endtime <= unix_timestamp()) and completes them (giving rewards to the related player, etc.).

It happened recently that there were, say, 100 actions to complete and cron job run 1 didn't finish within a minute, so cron job run 2 fired, and the result was that all the actions were completed twice.

How can I prevent this from happening again? Should I use a specific transaction isolation level for the session and reserve the rows for update?

PHP is 5.3.18, MySQL is 5.5.27, the table engine is InnoDB.

The current code called every minute is this:

public function complete_expired_actions ( $charflag = false )
{
    // Check whether there are actions to complete...

    $db = Database::instance();
    $db->query("set autocommit = 0");
    $db->query("begin");

    $sql = "select * from
        character_actions
        where status = 'running'
        and endtime <= unix_timestamp()";

    $result = $db->query( $sql );

    // try/catch: if an error occurs, the action that caused it is rolled back

    foreach ( $result as $row )
    {
        try
        {
            $o = $this->factory( $row->action );
            $o->complete_action( $row );

            if ( $row->cycle_flag == FALSE )
            {
                // not open to SQL-injection attacks because the parameters are not passed via the request
                if ( $charflag == true )
                    kohana::log( 'info', "-> Completing action: " . $row->id . ' - ' . $row->action . " for char: " . $row->character_id );

                $db->query( "update character_actions set status = 'completed' where id = " . $row->id );
                // in any case, invalidate the session!
                Cache_Model::invalidate( $row->character_id );
            }

            $db->query('commit');
        }
        catch (Kohana_Database_Exception $e)
        {
            kohana::log('error', kohana::debug( $e->getMessage() ));
            kohana::log('error', 'An error occurred, rolling back action: ' . $row->action . '-' . $row->character_id );
            $db->query("rollback");
        }
    }

    $db->query("set autocommit = 1");
}
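For reference, the "reserve the rows for update" idea mentioned above maps to InnoDB's locking read, `SELECT ... FOR UPDATE`. A minimal sketch (my assumption, not part of the original post) of how that could serialize two overlapping runs; note it only helps if the whole batch is processed in a single transaction, not with the per-row commit used in the code above:

```sql
start transaction;

-- Lock the candidate rows; a second overlapping run issuing the same
-- locking read blocks here until this transaction commits.
select id from character_actions
 where status = 'running'
   and endtime <= unix_timestamp()
   for update;

-- ...complete each action and set status = 'completed' in the same transaction...

commit;  -- the blocked run now proceeds and finds no 'running' rows left
```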

In these cases I add a new column, e.g. mark, to the table character_actions,

then at the start of the job I run this query:

$uniqueid = time();
$sql="update character_actions set mark = '$uniqueid' where status = 'running' and endtime <= unix_timestamp() AND mark is NULL "; 

then your code can be:

$sql = "select * from 
        character_actions
        where status = 'running' 
        and mark = '$uniqueid'"; 

    $result = $db -> query ( $sql ) ;   

This approach has a limitation: it starts several parallel jobs, which slow down the machine, which induces more delay and yet more parallel jobs...

This can be solved by introducing a limit:

$lim= 100 ; // tune to finish the job in 60 seconds
$sql="update character_actions set mark = '$uniqueid' where status = 'running' and endtime <= unix_timestamp() AND mark is NULL limit $lim  "; 

Of course this induces a delay in the attribution of rewards.
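If that delay is a concern, another common option (my suggestion, not part of the answer above) is to stop the runs from overlapping at all with MySQL's named application-level lock, which is available in 5.5:

```sql
-- Try to acquire a named lock without waiting: returns 1 if acquired,
-- 0 if another cron run still holds it (in which case just exit).
select get_lock('complete_expired_actions', 0);

-- ...run the completion job only if the call above returned 1...

-- Release the lock when the job is done.
select release_lock('complete_expired_actions');
```

With this guard, cron run 2 simply exits when run 1 is still working, so no action can be completed twice and no batch limit is needed.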
