What Is Eventual Consistency?
Eventual consistency is a consistency model for distributed systems in which programmers make the assumption that, given a long enough period of time with no new updates to a data item, every replica of that item will eventually hold the same value. The concept of eventual consistency is used in approaches such as optimistic replication, distributed shared memory, and distributed transactions. Regarding databases, eventual consistency is typically described as a three-stage progression, often summarized by the acronym BASE. First, the distributed information remains basically available on the system; this is followed by a soft state, in which different users may still be working with different versions of the data; and finally eventual consistency is achieved, and all replicas hold identical data.
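The progression from soft state to consistency can be sketched in a few lines of Python. The `Replica` class, per-copy version counter, and `gossip` helper below are illustrative assumptions for this article, not the API of any particular database:

```python
from dataclasses import dataclass

@dataclass
class Replica:
    """One copy of a data item, tagged with a version counter."""
    value: str
    version: int

def gossip(a: Replica, b: Replica) -> None:
    """Exchange state between two replicas: the higher version wins on both
    sides (a simple last-write-wins rule)."""
    if a.version > b.version:
        b.value, b.version = a.value, a.version
    elif b.version > a.version:
        a.value, a.version = b.value, b.version

# Three replicas agree on "v1"; then an update lands on replica 0 only.
replicas = [Replica("v1", 1), Replica("v1", 1), Replica("v1", 1)]
replicas[0] = Replica("v2", 2)

# Soft state: the replicas disagree until gossip rounds spread the update.
gossip(replicas[0], replicas[1])
gossip(replicas[1], replicas[2])

assert all(r.value == "v2" for r in replicas)  # eventual consistency reached
```

With no further writes, a few rounds of pairwise exchange are enough to make every copy identical, which is the guarantee the model provides.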
One of the most visible applications of eventual consistency is the online distribution of software updates. For the first few seconds after an update is released, no one will have it; not enough time has passed for users of the software to download and install the update. This is the "available" state: the update exists, but has yet to be distributed. Over time, as users download the update, some will have it and some will not. After enough time has passed, though, everyone who uses the software will have updated to the latest version. This is the premise behind the state of eventual consistency: given enough time, any update will fully propagate throughout the system.
As the system works toward eventual consistency, conflicts are inevitable. These occur when the copy of the program or data on a particular computer fails to match the most up-to-date version held elsewhere in the system. Systems are usually set up to detect such conflicts automatically, typically by comparing version numbers or timestamps attached to each copy. When the files on a specific computer are older than the latest version of the software or data in question, the system resolves the disparity with a repair operation; in the software-update example, this may simply mean prompting the user to install the update.
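A minimal illustration of this kind of conflict detection, assuming a simple per-record version counter (a stand-in for the timestamps or vector clocks real systems use; the record fields here are hypothetical):

```python
# Two copies of the same record; each carries a version counter that is
# incremented on every write.
local  = {"value": "old address", "version": 3}
latest = {"value": "new address", "version": 5}

def needs_repair(replica: dict, reference: dict) -> bool:
    """A copy is stale (in conflict) when its version lags the reference copy."""
    return replica["version"] < reference["version"]

if needs_repair(local, latest):
    # Resolve the conflict by adopting the newer version wholesale.
    local = dict(latest)

assert local["version"] == 5
```

Comparing versions rather than values lets the system decide which copy wins without understanding what the data means.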
Three methods are available to perform these repairs: write repair, read repair, and asynchronous repair. All three bring the out-of-date copy in line with the consistent model; the key difference among them is when the repair operation runs. Each approach has benefits and drawbacks.
In a write repair, stale copies are corrected during a write operation: while the system is already writing new data to the replicas, it also brings any out-of-date replicas in line with the model. This repairs the inconsistency, but it temporarily slows the original write. In a read repair, the corrective operation occurs during a read: the system compares the copies returned by different replicas, fixes any stale ones, and then returns the result, which in turn slows the read operation. In asynchronous repair, the correction takes place in the background, when neither a read nor a write is in progress, consuming otherwise idle processing capacity instead of adding latency to user requests.
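As one concrete sketch, read repair might look like the following. The `Replica` class and `read_with_repair` helper are hypothetical, and real systems generally compare timestamps or vector clocks rather than a bare counter:

```python
from dataclasses import dataclass

@dataclass
class Replica:
    """One replica's copy of a data item, with a version counter."""
    value: str
    version: int

def read_with_repair(replicas: list) -> str:
    """Read repair: compare the replicas' answers, fix stale copies,
    then return the newest value.

    Because the comparison and write-back happen on the read path,
    this is also why read repair adds latency to the read itself."""
    newest = max(replicas, key=lambda r: r.version)
    for r in replicas:
        if r.version < newest.version:  # stale copy found
            r.value, r.version = newest.value, newest.version
    return newest.value

replicas = [Replica("v2", 2), Replica("v1", 1), Replica("v2", 2)]
assert read_with_repair(replicas) == "v2"
assert all(r.version == 2 for r in replicas)  # stale replica repaired in passing
```

Write repair is the mirror image (the fix piggybacks on a write instead of a read), while asynchronous repair would run the same comparison loop on a background schedule.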