
Lock rows until the next select in Postgres

Is there a way in Postgres to lock rows until the next select query execution from the same system? One more thing: there will be no update process on the locked rows. The scenario is something like this:

If the table1 contains data like

 id |   txt
----+----------
  1 | World
  2 | Text
  3 | Crawler
  4 | Solution
  5 | Nation
  6 | Under
  7 | Padding
  8 | Settle
  9 | Begin
 10 | Large
 11 | Someone
 12 | Dance

If sys1 executes

select * from table1 order by id limit 5;

then it should lock the rows with id 1 to 5 against other systems that are executing select statements concurrently.

Later, if sys1 executes another select query like

select * from table1 where id>10 order by id limit 5;

then the previously locked rows should be released.

I don't think this is possible. You cannot block read-only access to a table (unless that select is done FOR UPDATE).

As far as I can tell, the only chance you have is to use the pg_advisory_lock() function.
http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS

But this requires a "manual" release of the locks obtained through it. You won't get automatic unlocking with that.

To lock the rows you would need something like this:

select pg_advisory_lock(id), *
from (
  select * from table1 order by id limit 5
) t;

(Note the use of the derived table for the LIMIT part. See the manual link I posted for an explanation)

Then you need to store the retrieved IDs and later call pg_advisory_unlock() for each ID.

If each process is always releasing all IDs at once, you could simply use pg_advisory_unlock_all() instead. Then you will not need to store the retrieved IDs.
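For example, a minimal sketch of the release step (assuming the batch above returned the ids 1 to 5):

-- release each advisory lock individually, one call per stored id
select pg_advisory_unlock(id) from table1 where id in (1, 2, 3, 4, 5);

or, if the session holds no other advisory locks it wants to keep:

-- release every session-level advisory lock held by this session
select pg_advisory_unlock_all();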

Note that this will not prevent others from reading the rows using "normal" selects. It will only work if every process that accesses that table uses the same pattern of obtaining the locks.

It looks like you really have a transaction which transcends the borders of your database, and all the change happens in another system.

My idea is: select ... for update nowait to lock the relevant rows, then offload the data into another system, then rollback to unlock the rows. No two select ... for update queries will select the same row, and the second select will fail immediately rather than wait and proceed.
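A minimal sketch of that idea (the offload step happens outside the database and is only indicated by a comment):

begin;
-- lock the first five rows; error out at once if another session holds them
select * from table1 order by id limit 5 for update nowait;
-- offload the returned rows to the other system here
rollback; -- releases the row locks without changing any data

A second session running the same query while the first transaction is open gets an error immediately instead of blocking, because of NOWAIT.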

But you don't seem to mark offloaded records in any way; I don't see why two non-consecutive selects wouldn't happily select an overlapping range. So I'd still update the records with a flag and/or a target user name, and would only select records with the flag unset.
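A sketch of that flag approach, assuming an extra boolean column (the name "processed" is illustrative, not part of the original table):

alter table table1 add column processed boolean not null default false;

-- lock five unprocessed rows, mark them, and hand them back in one statement
with batch as (
  select id from table1
  where not processed
  order by id
  limit 5
  for update nowait
)
update table1
set processed = true
from batch
where table1.id = batch.id
returning table1.*;

Because only rows with the flag unset are ever selected, two non-consecutive batches can no longer overlap, even across transactions.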

I tried both select ... for update and pg_try_advisory_lock and got close to my requirement.

/* rows are locked, but LIMIT is the problem */
select * from table1 where pg_try_advisory_lock(id) limit 5;
...
$_SESSION['rows'] = $rowcount; // number of rows to process
...
/* after each word is processed */
$_SESSION['rows'] -= 1;
...
/* and finally unlock the locked rows */
if ($_SESSION['rows'] === 0)
    select pg_advisory_unlock_all();

But there are two problems with this:
1. As the LIMIT is applied before the locking, every instance tries to lock the same rows each time.
2. I am not sure whether pg_advisory_unlock_all will release only the locks held by the current instance, or those of all instances.
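Regarding problem 1, one option might be the derived-table trick from the first answer, turned around: sort inside the subquery, attempt the lock outside it, and let LIMIT stop after five successful locks (a sketch only; it relies on pg_try_advisory_lock being volatile, which keeps Postgres from pushing the filter below the ORDER BY):

select *
from (
  select id, txt from table1 order by id
) t
where pg_try_advisory_lock(id)
limit 5;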
