When you are dealing with concurrency, you need to take care of many challenges. One of them is maintaining a consistent state of all your shared resources.
While modern application servers and database management systems handle most concurrency issues and make life much easier for developers, some problems remain our responsibility, and we need to be aware of them.
The lost update problem occurs when multiple concurrent transactions try to read and update the same data. Let us understand this with the help of an example:
In this example, we have two concurrent transactions trying to withdraw $50 and $30 from Bob's account. After both transactions have finished, we expect to see $20 as the account balance. But since the second transaction reads only committed data, it is unaware of the concurrent operation and behaves as if the first transaction never happened. As a result, the second transaction overwrites the first update, and our system suffers a $50 loss. This is an example of the Lost Update problem.
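The interleaving above can be sketched in plain Java, with no database involved (the method and variable names here are hypothetical, chosen only to mirror the sequence diagram):

```java
// Deterministic simulation of the lost-update interleaving:
// both "transactions" read the balance before either writes it back.
public class LostUpdateDemo {

    static int simulate() {
        int balance = 100;          // Bob starts with $100

        int readByTx1 = balance;    // T1 reads 100
        int readByTx2 = balance;    // T2 reads 100, before T1 commits

        balance = readByTx1 - 50;   // T1 writes 50
        balance = readByTx2 - 30;   // T2 writes 70, overwriting T1's update

        return balance;             // 70 instead of the expected 20
    }

    public static void main(String[] args) {
        System.out.println("Final balance: " + simulate());
    }
}
```

The $50 withdrawal is silently lost: the final balance is 70 rather than 20.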
Here you will find a complete example: https://github.com/giova333/spring-boot-distributed-lock
Let us design a simple banking system where we can deposit and withdraw funds from accounts. I am using Spring Boot 2.4.4, and these are the dependencies needed for the tutorial:
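A sketch of the pom.xml fragment (versions are managed by the Spring Boot 2.4.4 parent; the exact list is an assumption based on what the tutorial's code uses):

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <scope>runtime</scope>
    </dependency>
</dependencies>
```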
First, let us create a simple Account model:
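A minimal sketch of the entity (the field names owner and balance are assumptions chosen to match the example; getters and setters are omitted for brevity):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Account {

    @Id
    @GeneratedValue
    private Long id;

    // Name of the account holder, e.g. "Bob"
    private String owner;

    // Current balance in dollars
    private int balance;

    // getters and setters omitted for brevity
}
```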
For data access, we need to create a JpaRepository for accounts:
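A minimal sketch of the repository, assuming the Account entity from the previous step:

```java
import org.springframework.data.jpa.repository.JpaRepository;

// Spring Data JPA generates the implementation at runtime
public interface AccountRepository extends JpaRepository<Account, Long> {
}
```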
AccountService has a single method that transfers funds from the source account to the target account:
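A sketch of the service (the parameter names are assumptions). Note that this is a classic read-modify-write cycle, which is exactly the pattern vulnerable to lost updates:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    private final AccountRepository accountRepository;

    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    @Transactional
    public void sendMoney(Long fromAccountId, Long toAccountId, int amount) {
        // Read...
        Account from = accountRepository.findById(fromAccountId).orElseThrow();
        Account to = accountRepository.findById(toAccountId).orElseThrow();

        // ...modify; Hibernate flushes the changes on commit (write)
        from.setBalance(from.getBalance() - amount);
        to.setBalance(to.getBalance() + amount);
    }
}
```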
We need to provide an API so we can trigger our logic via an HTTP call:
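A sketch of the controller (the endpoint path and parameter names are assumptions, not the repository's original API):

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AccountController {

    private final AccountService accountService;

    public AccountController(AccountService accountService) {
        this.accountService = accountService;
    }

    @PostMapping("/accounts/transfer")
    public void transfer(@RequestParam Long from,
                         @RequestParam Long to,
                         @RequestParam int amount) {
        accountService.sendMoney(from, to, amount);
    }
}
```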
And finally, we must add a few accounts so we can use them for experiments:
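For example, seed data like this (Bob's starting balance of $100 follows from the example above; the second account is hypothetical):

```sql
INSERT INTO account (id, owner, balance) VALUES (1, 'Bob', 100);
INSERT INTO account (id, owner, balance) VALUES (2, 'Alice', 100);
```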
You might have noticed the postgresql Maven dependency. You can either install PostgreSQL locally or run it inside Docker:
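For example (container name, credentials, and database name are placeholders; adjust them to match your application.properties):

```shell
docker run --name accounts-db \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=accounts \
  -p 5432:5432 \
  -d postgres:13
```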
To reproduce the lost update described in the sequence diagram, we need to call our service. However, since most concurrency problems only happen with the right timing, it is better to introduce an artificial delay, Thread.sleep(4000), at the end of the AccountService.sendMoney method before triggering the API:
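With the delay in place, two overlapping calls are enough to reproduce the problem. Assuming the hypothetical endpoint sketched earlier:

```shell
# Fire two transfers almost simultaneously; the sleep inside
# sendMoney keeps both transactions open at the same time.
curl -X POST "localhost:8080/accounts/transfer?from=1&to=2&amount=50" &
curl -X POST "localhost:8080/accounts/transfer?from=1&to=2&amount=30" &
wait
```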
Now if we check the account table we will see that a lost update has been successfully reproduced:
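For example:

```sql
SELECT owner, balance FROM account WHERE owner = 'Bob';
-- balance is 70, not 20: the $50 withdrawal was overwritten by the $30 one
```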
Preventing Lost Updates
There are several ways to prevent lost updates. I will show you the following:
Increase transaction isolation level
Optimistic locking
Pessimistic locking
Atomic write operations
Increase transaction isolation level
An advantage of this approach is that databases can perform this check efficiently in conjunction with the repeatable read isolation level. PostgreSQL's repeatable read, Oracle's serializable, and SQL Server's snapshot isolation levels automatically detect when a lost update has occurred and abort the offending transaction. However, MySQL/InnoDB's repeatable read does not detect lost updates. Automatic lost update detection is a great feature because it does not require application code to use any special database features. However, if your system needs to support multiple RDBMSs, this approach might not work for you.
To use this approach, you need to change the transaction isolation level either globally or per transaction; Spring allows you to do that by specifying:
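A sketch of the change to sendMoney, using Spring's Isolation enum:

```java
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(isolation = Isolation.REPEATABLE_READ)
public void sendMoney(Long fromAccountId, Long toAccountId, int amount) {
    // same read-modify-write logic as before; on PostgreSQL,
    // repeatable read now aborts the transaction that would
    // otherwise overwrite a concurrent update
}
```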
Now, if you try to make the same API invocation, the second transaction will be aborted:
Optimistic Locking
Optimistic Locking, also known as Conditional Update or Compare-And-Set, helps avoid lost updates by allowing an update to happen only if the value has not changed since you last read it. If the current value does not match what you previously read, the update has no effect, and the read-modify-write cycle must be retried. There are different ways to implement Optimistic Locking, but we will use probably the most common one. First of all, we need to add a new version field to our domain model:
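A minimal sketch of the change, assuming the Account entity from earlier. Hibernate increments the version on every update and adds it to the WHERE clause of the UPDATE statement:

```java
import javax.persistence.Version;

// added inside the Account entity
@Version
private Long version;
```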
And this is pretty much it. Now let us check how our queries have changed because of this small change, thanks to Hibernate. (Note that we retrieve accounts by id rather than by name; however, for better comprehension, the diagrams use the owner's name.)
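The generated SQL now looks roughly like this (a sketch; the exact statements Hibernate emits may differ):

```sql
-- Read the current state together with its version
SELECT id, owner, balance, version FROM account WHERE id = ?;

-- Write back only if the version is still the one we read;
-- Hibernate checks the update count and raises an
-- optimistic locking failure when it is 0
UPDATE account
SET balance = ?, version = ?
WHERE id = ? AND version = ?;
```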
Since the row version has been updated by the first transaction, the second update has no effect. As a result, Hibernate throws an exception, and the second transaction must be retried.
Pessimistic Locking
Another option for preventing lost updates is to explicitly lock the objects that are going to be updated. The application can then perform a read-modify-write cycle, and if any other transaction tries to concurrently read the same object, it is forced to wait until the first read-modify-write cycle has completed. This approach is called pessimistic locking. In our example, we will use the PESSIMISTIC_WRITE mode, which allows us to obtain an exclusive lock and prevent the data from being read, updated, or deleted by other transactions.
Since the second transaction reads the data only after the first transaction has completed, there are no overwrites, and therefore lost updates cannot occur. To use pessimistic locking in our service, we only need to override one method in AccountRepository:
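A sketch of that override, using Spring Data's @Lock annotation (Hibernate translates PESSIMISTIC_WRITE into a SELECT ... FOR UPDATE on PostgreSQL):

```java
import java.util.Optional;

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Every lookup through this method now takes an exclusive row lock
    @Override
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<Account> findById(Long id);
}
```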
Now we can trigger our API, and you will see that the state of our database is consistent: as expected, Bob has $20 in his balance:
Atomic write operations
Many databases provide atomic update operations, which remove the need to implement read-modify-write cycles in application code. They are usually the best solution if your logic can be expressed in terms of those operations. Unfortunately, object-relational mapping frameworks such as Hibernate make it easy to accidentally write code that performs unsafe read-modify-write cycles instead of using the atomic operations provided by the database. Besides that, not all writes can easily be expressed in terms of atomic operations, because some of them require complex validation or arbitrary editing. Nevertheless, atomic operations might be a good solution for you.
This is how we can update our code to use atomic update operations:
Use atomic operations instead of read-modify-write cycles:
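One possible sketch, using a modifying JPQL query (the method name addToBalance and the reworked service call are assumptions, not the repository's original API):

```java
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // The database applies the whole update in one atomic statement,
    // so there is no read-modify-write cycle in application code.
    @Modifying
    @Query("UPDATE Account a SET a.balance = a.balance + :amount WHERE a.id = :id")
    void addToBalance(@Param("id") Long id, @Param("amount") int amount);
}
```

sendMoney can then simply call addToBalance(fromAccountId, -amount) and addToBalance(toAccountId, amount) inside one transaction.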
In this tutorial, we have seen various ways to prevent lost updates. In the next article, https://www.up-2date.com/post/write-skew-problem-and-distributed-lock, I will show you how updates can be avoided altogether by using append-only operations, and how to solve the write-skew problem.