
Progress ABL: How do I determine which end-user is locking a record, and display to other end-users who holds the lock?

I realize that Progress ABL has a natural tendency to do this:

By default, Progress displays the message: "<table> in use by <user> on <tty>. Wait or choose CANCEL to stop. (2624)"

This 2624 message provides information, but it is not usually what is wanted, because the user does not get a chance to commit changes or proceed; instead a STOP condition is raised and they are returned to the startup procedure.

However, I want to be able to display which user is locking the record after something like an "IF LOCKED(______) THEN DO:" check — showing who holds the lock and which specific record is locked.

I did find information in the article linked at the bottom of this post about using the VST _Lock, but the Progress documentation states this verbatim: "Note: Use caution querying the _Lock table; frequent access can consume large quantities of system resources and negatively impact performance." Is there an alternative approach or best practice for this? Any help would be greatly appreciated.

https://knowledgebase.progress.com/articles/Article/P182366
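
For reference, the NO-WAIT pattern the question is built around looks roughly like this (a minimal sketch; the Customer table and CustNum field are hypothetical placeholders):

find first Customer exclusive-lock
     where Customer.CustNum = 1 no-wait no-error.

if locked Customer then do:
  /* the record is held by another user - this is the point where
     we would like to show WHO holds the lock */
  message "That customer record is locked by another user."
    view-as alert-box.
end.
else if available Customer then do:
  /* we got the lock - update as usual */
end.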

I am sympathetic to the desire to provide the user with a nice message about who is holding the lock. But the kbase is not kidding. It is a very bad idea to use _LOCK in this way.

11.4+ makes it less bad but it is still a very painful thing to do on a large production system.

It will appear to be "ok" on a small system with a default lock table size (-L 8192). On a large active system with a large lock table (values of -L north of 1M are common) and lots of locks being used you will have a very, very different and very negative experience.

A better solution might be to look at "blocked users":

for each dictdb._Connect no-lock
    where _Connect-usr <> ?
      and _Connect-wait <> " -- ":  /* there are spaces around the '--' */

  display _Connect.

end.

This will be much faster and might tell you everything you need to know.
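
For example, displaying a few specific _Connect fields rather than the whole record (a sketch; these are standard _Connect VST fields, but check the names against your OpenEdge version):

for each dictdb._Connect no-lock
    where _Connect-usr <> ?
      and _Connect-wait <> " -- ":  /* there are spaces around the '--' */

  display
    _Connect._Connect-Usr     /* user number              */
    _Connect._Connect-Name    /* login name               */
    _Connect._Connect-Pid     /* client OS process id     */
    _Connect._Connect-Device  /* tty / client device      */
    _Connect._Connect-Wait    /* resource being waited on */
    with width 132.

end.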

If you are going to scan _LOCK regardless of the kbase's warning at least put some logic in your loop to track how long it is taking and bail out if it gets to be too long. Something like this might be a good start:

etime( yes ).

for each dictdb._Lock where _Lock._Lock-usr <> ? and _Lock._Lock-recid <> ?:

  if etime > 500 then leave.

  /* whatever ... */

end.

I assume you are already familiar with FIND ... EXCLUSIVE-LOCK/SHARE-LOCK NO-WAIT NO-ERROR, since you can't use the LOCKED function without NO-WAIT.
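
Putting those pieces together, a hedged sketch of answering the original question might look like the following. The Customer table and CustNum field are placeholders, and a real implementation should also match _Lock._Lock-Table against the table's number from _File, since a RECID alone is not unique across tables:

define variable rLocked as recid     no-undo.
define variable cWho    as character no-undo initial "unknown".

find first Customer exclusive-lock
     where Customer.CustNum = 1 no-wait no-error.

if locked Customer then do:

  /* re-find no-lock just to obtain the recid of the locked record */
  find first Customer no-lock where Customer.CustNum = 1 no-error.
  rLocked = recid( Customer ).

  etime( yes ).

  for each dictdb._Lock no-lock
      where _Lock._Lock-Recid = rLocked:

    if etime > 500 then leave.  /* bail out, per the warning above */

    find first dictdb._Connect no-lock
         where _Connect._Connect-Usr = _Lock._Lock-Usr no-error.
    if available _Connect then
      cWho = _Connect._Connect-Name.

    leave.

  end.

  message "Record is locked by:" cWho view-as alert-box.

end.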

Querying _lock might not be as bad as the comment makes it sound. You will only do this occasionally, when a user encounters a locking conflict. Also, the performance greatly depends on the total number of entries currently locked.

The important thing is to query _lock the right way, depending on the Progress version.

To repeat my answer to a similar question:

  • _LOCK always has the full number of entries given in -L, regardless how many locks currently exist.
  • In pre-11.4 releases the fields are not indexed, but all used locks are at the beginning of the table, so you can use

     FOR EACH _Lock NO-LOCK: IF _Lock._Lock-Usr = ? THEN LEAVE. 

    (_Lock._Lock-Name = ? is also fine). See http://knowledgebase.progress.com/articles/Article/P161995 (Apparently this is not true in practice for very large lock tables or many locks. Still this is probably the best you can do since scanning the whole lock table indeed takes some time.)

  • In 11.4 and 11.5 the populated entries are no longer at the beginning so the old code will give wrong results (see http://knowledgebase.progress.com/articles/Article/000056304 , this is fixed in 11.5.1). Fortunately scanning the lock table is now much faster so you can use

     FOR EACH _Lock NO-LOCK WHERE _Lock-Recid <> ?: 

    mentioned in the same article. Technically this is not implemented with indices. (An index wouldn't work with the <> operator.)

  • From 11.5 on both variants should work, but the newer variant with a where-phrase should be faster.
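
Taken together, a sketch of choosing the scan variant at run time (PROVERSION is the standard ABL version function; the plain string comparison below is a simplification — it mis-orders versions like "11.10" vs "11.4", so real code should compare the numeric parts):

if proversion >= "11.4" then do:
  for each dictdb._Lock no-lock
      where _Lock._Lock-Recid <> ?:
    /* 11.4+: much faster scan using the where-phrase */
  end.
end.
else do:
  for each dictdb._Lock no-lock:
    if _Lock._Lock-Usr = ? then leave.  /* pre-11.4: used entries come first */
    /* ... */
  end.
end.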
