What Is Concurrency Control?

Jean Marie Asta

In data management programming, concurrency control is a mechanism that ensures concurrent operations produce correct results, and that those results are obtained in a timely manner. It is most often found in databases, where many users simultaneously read and modify a shared store of searchable information.


Programmers try to design a database so that the effect of its transactions on shared data is serially equivalent: the final state of the data is one that could have been produced by running the same transactions one at a time, in some order. Without this guarantee, the data can be left in an invalid state when two transactions modify it concurrently, because one transaction can silently overwrite a change the other has just made.
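To make the problem concrete, here is a minimal Java sketch, purely for illustration (the class, amounts, and timing are hypothetical, not from the article), of two concurrent "transactions" losing an update because nothing coordinates their access to shared data.

```java
// A minimal sketch of the lost-update problem: two concurrent deposits
// read, modify, and write a shared balance with no concurrency control,
// so one update can silently overwrite the other.
public class LostUpdateDemo {
    // Shared data item, e.g. an account balance starting at 100.
    private static int balance = 100;

    // Each "transaction" deposits 10 by reading, computing, then writing.
    private static void deposit() {
        int read = balance;                      // read the shared value
        try { Thread.sleep(10); } catch (InterruptedException e) { }
        balance = read + 10;                     // write may clobber the other write
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LostUpdateDemo::deposit);
        Thread t2 = new Thread(LostUpdateDemo::deposit);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // A serial execution would give 120, but the interleaving above
        // typically prints 110: one deposit is lost.
        System.out.println("Final balance: " + balance);
    }
}
```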

There are several ways to force transactions to execute one after another, such as running them all under a single mutual-exclusion lock or routing them through a resource that decides which transaction may proceed. Serializing everything in this way is overkill, however, and throws away the benefits of concurrency, particularly in a distributed system. Concurrency control instead allows multiple transactions to execute simultaneously while keeping them isolated from one another, preserving serial equivalence. One way to implement it is to place an exclusive lock on each shared resource: a transaction locks the objects it intends to use, and if another transaction requests an object that is locked, it must wait until the object is unlocked.
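The following sketch shows one way this exclusive-locking idea can look in Java; the class name and balance field are hypothetical, not part of the article. Each shared object carries its own lock, and a transaction that finds the object locked simply blocks until it is released.

```java
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of exclusive locking on a shared object.
public class LockedAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    // The whole read-modify-write runs under the exclusive lock, so
    // concurrent deposits can no longer lose updates.
    public void deposit(int amount) {
        lock.lock();                 // blocks until the object is unlocked
        try {
            balance += amount;
        } finally {
            lock.unlock();           // always release, even on failure
        }
    }

    public int getBalance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```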

In distributed systems, this method is implemented with lock managers, servers that grant and release locks on resources. These behave much like servers for centralized mutual exclusion, with clients sending messages to request a lock and later to release it. Serial equivalence must still be preserved: if two separate transactions access the same set of objects, the result must be the same as if the transactions had executed one after the other in some order. To guarantee this ordering, two-phase locking is introduced, meaning a transaction may not acquire any new lock after it has released a lock.
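In a single process, the behavior of such a lock manager can be sketched roughly as follows. The class is hypothetical and omits deadlock handling and network messaging, but it shows the request-and-wait, release-and-notify cycle described above.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of a lock manager: clients ask for an exclusive lock
// on a named resource and later release it, much as they would by
// messaging a lock-manager server in a distributed system.
public class LockManager {
    // Maps each resource name to the id of the transaction holding it.
    private final Map<String, Long> holders = new HashMap<>();

    // Blocks until the resource is free, then records the new holder.
    public synchronized void acquire(String resource, long txId)
            throws InterruptedException {
        while (holders.containsKey(resource)) {
            wait();                      // resource is locked: wait for a release
        }
        holders.put(resource, txId);
    }

    // Releases the lock if the caller actually holds it, waking waiters.
    public synchronized void release(String resource, long txId) {
        if (Long.valueOf(txId).equals(holders.get(resource))) {
            holders.remove(resource);
            notifyAll();
        }
    }
}
```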

In two-phase locking for concurrency control, the first phase is called the growing phase, during which the transaction acquires the locks it needs. The second is the shrinking phase, during which the transaction releases its locks. This type of locking has a known problem: if a transaction aborts after it has begun releasing locks, other transactions may already have used data from objects it modified and unlocked, and those transactions must then be aborted as well.
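The two phases can be pictured with the following sketch, which is again hypothetical and simplified. It also holds every lock until commit or abort, the strict variant of two-phase locking commonly used to avoid the cascading aborts just described.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of two-phase locking. During the growing phase the
// transaction only acquires locks; once the shrinking phase starts, it
// only releases them and may not lock anything new.
public class TwoPhaseTransaction {
    private final Deque<ReentrantLock> held = new ArrayDeque<>();
    private boolean shrinking = false;

    // Growing phase: acquire a lock on a shared object before using it.
    public void lock(ReentrantLock objectLock) {
        if (shrinking) {
            throw new IllegalStateException("no new locks after a release");
        }
        objectLock.lock();
        held.push(objectLock);
    }

    // Shrinking phase: release every lock. Holding all locks until this
    // point (strict two-phase locking) keeps other transactions from
    // reading data written by a transaction that later aborts.
    public void commitOrAbort() {
        shrinking = true;
        while (!held.isEmpty()) {
            held.pop().unlock();
        }
    }
}
```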
