
How do transactions really work - a simple use case (Spring Boot)?

I'm trying to understand whether I'm doing the right thing with transactions. I have a small REST API built on Spring Boot, using PostgreSQL.

The use case is a "reservation": an incoming request should find some entity and set its status to "reserved". What must be prevented is two requests returning the same entity.

Currently I'm wrapping the whole endpoint handler in a transaction (below). I understand that the system will basically take a snapshot of the current state, and the first request will then modify the table.

The question is: when the second request comes in while the first is still inside its transaction, what will happen? I need the find() query to wait until the first transaction is over and only then proceed. Will it work like this, at least in theory?

@Transactional
@RequestMapping(value = "/newTour", method = RequestMethod.GET, headers = "Accept=application/xml",
        consumes = "application/xml", produces = "application/xml")
public @ResponseBody ResponseEntity<?> addTourReservation(@RequestBody PartialTourUpdate partialUpdate) {
    try {
        List<Tour> tours = tourRepo.findFirstPessimisticByTourTypeInAndStatusOrderByPriorityDesc(
                partialUpdate.getTourType(), Tour.STATUS_OPEN);
        if (tours != null && !tours.isEmpty()) {
            Tour tour = tours.get(0);
            tour.setReservationID(partialUpdate.getReservationID());
            tour.setStatus(Tour.STATUS_TO_RESERVE);
            tourRepo.save(tour);
            orderRepo.updateReservationStatus(true, tour.getTourID());
            return new ResponseEntity<Tour>(tour, HttpStatus.CREATED);
        } else {
            rM.setValue(ResultMessage.ErrorCode.LOS_NOT_FOUND);
            rM.log();
            return new ResponseEntity<ResultMessage>(rM, HttpStatus.OK);
        }
    } catch (Exception e) {
        rM.setValue(ResultMessage.ErrorCode.LOS_UNKNOWN);
        rM.log();
        return new ResponseEntity<ResultMessage>(rM, HttpStatus.OK);
    }
}

Locking a row for update, so that concurrent transactions cannot read it, implies an exclusive lock.

Using JPA, this is achieved with a PESSIMISTIC_WRITE lock.

You need to annotate your repository method with

 @Lock(LockModeType.PESSIMISTIC_WRITE)

Beware that this can take a lock spanning the whole tour table, preventing any concurrent transaction from reading any row, which can cause thread contention under heavy load.
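As a sketch, such an annotated Spring Data JPA repository method could look like the following (the Tour entity and the derived method name are assumptions taken from the question, not a fixed API):

```java
// Sketch of a Spring Data JPA repository applying a pessimistic write lock.
import java.util.List;

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface TourRepository extends JpaRepository<Tour, Long> {

    // Typically translates to SELECT ... FOR UPDATE: the rows matched by
    // this query stay locked until the surrounding transaction commits
    // or rolls back, so a second request blocks on the find.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    List<Tour> findByStatusOrderByPriorityDesc(String status);
}
```

The lock is only acquired if the method is called inside an active transaction (e.g. from a @Transactional service method).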

An alternative approach is to select all available tours, pick one at random from the list, and lock only that row (not the whole table) beforehand using entityManager.lock(tour, LockModeType.PESSIMISTIC_FORCE_INCREMENT) (the entity must have an @Version attribute). If the update triggers an exception (because another transaction already reserved it), just pick another one and try to update that instead.
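A minimal sketch of that retry loop, assuming a Tour entity with an @Version field and the names from the question (this must run inside an active transaction):

```java
// Sketch: lock a single randomly chosen Tour row and retry on conflict.
// Tour and its STATUS constants are assumptions taken from the question.
import java.util.Collections;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PessimisticLockException;

public class TourReserver {

    /** Returns a reserved Tour, or null if none could be locked. */
    public Tour reserveAny(EntityManager em, List<Tour> openTours, long reservationId) {
        Collections.shuffle(openTours);          // spread contention across rows
        for (Tour tour : openTours) {
            try {
                // Locks this row only and increments its @Version column.
                em.lock(tour, LockModeType.PESSIMISTIC_FORCE_INCREMENT);
                tour.setReservationID(reservationId);
                tour.setStatus(Tour.STATUS_TO_RESERVE);
                return tour;                     // changes are flushed at commit
            } catch (PessimisticLockException e) {
                // Another transaction got this row first: try the next one.
            }
        }
        return null;                             // nothing left to reserve
    }
}
```

Shuffling before iterating means concurrent requests tend to contend on different rows instead of all queuing on the highest-priority one.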

However, the best approach remains to let the database handle the concurrency problem and to reserve the 'tour' using a single SQL (or HQL) update query: there is no business logic in your method, so you don't need to retrieve and manipulate the entity before updating it.
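One possible sketch of such a single-statement update with Spring Data JPA (the query, method name, and field names are assumptions based on the question's entity):

```java
// Sketch: reserve a tour atomically with one UPDATE statement.
// The status check in the WHERE clause guarantees that only one
// concurrent request can flip the row from OPEN to TO_RESERVE.
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface TourRepository extends JpaRepository<Tour, Long> {

    // Returns the number of rows actually updated: 1 on success,
    // 0 if another transaction already reserved this tour.
    @Modifying
    @Query("update Tour t set t.status = 'TO_RESERVE', t.reservationID = :resId "
         + "where t.tourID = :tourId and t.status = 'OPEN'")
    int reserve(@Param("tourId") long tourId, @Param("resId") long resId);
}
```

A return value of 0 tells the caller that another transaction won the race, so it can pick another tour and retry.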

Concurrent transactions may suffer from the following anomalies:

Case 1. Transaction T1 reads data from table A1 that was written by another concurrent transaction T2. If T2 then rolls back, the data T1 obtained is invalid. E.g. a=2 is the original value; T1 reads a=1, which was written by T2. When T2 rolls back, a is restored to 2 in the database, but T1 still holds a=1.

Case 2. Transaction T1 reads data from table A1, and another concurrent transaction T2 then updates that data. If T1 reads it again, the value differs from what it read before, because T2 changed it in between. E.g. T1 reads a=1, T2 updates it to a=2; T1's two reads of the same row disagree.

Case 3. Transaction T1 reads a certain number of rows from table A1, and another concurrent transaction T2 inserts more rows into A1. If T1 repeats the query, the number of rows it sees differs from what it saw before.

Case 1 is called Dirty reads.

Case 2 is called Non-repeatable reads.

Case 3 is called Phantom reads.

So, the isolation level is the extent to which Case 1, Case 2, and Case 3 are prevented. You can obtain complete isolation by locking, that is, by preventing concurrent reads and writes of the same data from occurring, but this affects performance. How much isolation is required varies from application to application.

ISOLATION_READ_UNCOMMITTED: allows reading changes that haven't yet been committed. It suffers from Case 1, Case 2, and Case 3.

ISOLATION_READ_COMMITTED: allows reads only of data that concurrent transactions have committed. It may still suffer from Case 2 and Case 3, because other transactions may be updating the data.

ISOLATION_REPEATABLE_READ: multiple reads of the same field yield the same result unless the transaction itself changes it. It may still suffer from Case 3, because other transactions may be inserting new rows.

ISOLATION_SERIALIZABLE: Case 1, Case 2, and Case 3 never happen. It is complete isolation, involves full locking, and affects performance because of that locking.
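In Spring, these levels are selected per transaction via the @Transactional annotation; a small sketch (the service name and method are assumptions for illustration):

```java
// Sketch: choosing an isolation level for a Spring-managed transaction.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReservationService {

    // SERIALIZABLE prevents dirty, non-repeatable, and phantom reads,
    // at the cost of more locking and aborted transactions under load.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void reserveTour(long tourId, long reservationId) {
        // ... find-and-reserve logic runs inside this transaction ...
    }
}
```

If no isolation is specified, Spring uses Isolation.DEFAULT, i.e. whatever the underlying database defaults to (READ_COMMITTED for PostgreSQL).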

Hope this helps! Have a good day.
