The consistency model is an abstraction of the contract between a distributed system and its developers. It can be viewed from two perspectives, adapted from Linearizability versus Serializability:
- Serializability guarantees that concurrent transactions on multiple objects behave as if they were executed in some serial order. It represents the I in ACID.
- Linearizability defines a recency guarantee on the behavior of a single operation on a single object. It represents the C in the CAP theorem.
Transaction isolation is primarily about avoiding race conditions due to concurrently executing transactions, whereas distributed consistency is mostly about coordinating the state of replicas in the face of delays and faults.
— Designing Data-Intensive Applications, Chapter 9. Consistency and Consensus
The diagram below is copied from jepsen.io to guide our exploration of consistency models in Designing Data-Intensive Applications, Chapter 7. Transactions and Chapter 9. Consistency and Consensus.
Assume Alice has two accounts, saving and checking, each with $500 deposited, $1,000 in total. We MAY encounter the following problems when dealing with concurrent transactions.
A dirty write occurs when transaction A overwrites transaction B's uncommitted write. This is literally data loss, and it is not acceptable under any circumstances, even in the least restrictive Read Uncommitted level.
If transaction A can read transaction B's uncommitted write, we call it a dirty read. It is also harmful, as the partial update leaks outside the transaction, and it MAY not represent the final state if the transaction is aborted.
Consider Alice transferring $100 from the checking account to the saving account, while checking the total balance during the transfer. With dirty reads, Alice MAY see the total as $900, $1,000, or $1,100 depending on the order of operations and timing.
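The three possible totals can be enumerated with a small sketch. This is not a real database, just a step-by-step simulation of the non-atomic transfer; the step ordering and helper name are hypothetical.

```python
# A sketch of dirty reads during a non-atomic transfer: a concurrent
# reader MAY observe any intermediate state of the transaction.

def transfer_states(debit_first: bool):
    """Yield the (checking, saving) state after each step of the transfer."""
    checking, saving = 500, 500
    yield checking, saving              # before the transfer
    if debit_first:
        checking -= 100                 # debit checking first
        yield checking, saving
        saving += 100
    else:
        saving += 100                   # credit saving first
        yield checking, saving
        checking -= 100
    yield checking, saving              # after the transfer

# Collect every total a dirty reader could observe, under both orderings.
totals = {c + s for order in (True, False) for c, s in transfer_states(order)}
print(sorted(totals))  # [900, 1000, 1100]
```

The $900 and $1,100 readings exist only between the two writes; Read Committed hides them by never exposing uncommitted state.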
The Read Committed isolation level prevents dirty reads.
Let's say Alice deposits $100 and $50 to the checking account concurrently, and the bank takes the read-modify-write approach to update the account. Alice MAY see her checking account with $550, $600, or $650, because one of the updates MAY be lost.
The root cause of this problem is that read-modify-write is NOT atomic. In multi-threaded programming, we can use atomic instructions such as compare-and-swap (CAS), or a mutex. In the SQL world, we use SELECT FOR UPDATE to lock the selected rows, or enforce Cursor Stability isolation.
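The mutex approach can be sketched in a few lines. This is a minimal multi-threaded illustration, not a banking API; the account is just a global variable, and the lock makes the read-modify-write sequence a critical section.

```python
import threading

# A sketch of the lost-update fix: guard read-modify-write with a mutex
# so the two concurrent deposits cannot interleave.

balance = 500
lock = threading.Lock()

def deposit(amount: int) -> None:
    global balance
    with lock:                 # critical section: read-modify-write is atomic
        current = balance      # read
        current += amount      # modify
        balance = current      # write

t1 = threading.Thread(target=deposit, args=(100,))
t2 = threading.Thread(target=deposit, args=(50,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # 650 -- neither update is lost
```

Without the lock, both threads could read $500 and the final balance could be $550 or $600.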
Read skew, aka non-repeatable read
In the transfer example above, if Alice begins another transaction and reads the checking account twice during the transfer, she MAY see $500 and then $400. This timing anomaly, aka skew, is generally acceptable under Read Committed, but is prevented at the Repeatable Read isolation level.
Under Repeatable Read isolation, Alice opens a retirement account with $1,000 deposited, and in another transaction, Alice queries the sum of all her accounts. She MAY see $1,000 and then $2,000. This anomaly, caused by new rows inserted by another transaction, is known as a phantom read. It can be prevented by Snapshot Isolation, which is quite essential for backups and analytics.
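The idea behind the snapshot can be shown with a toy sketch: the summing transaction works on a copy of the accounts taken when it begins, so a row inserted by a concurrent transaction never appears. This is only an illustration of the concept, not how a real MVCC engine is implemented.

```python
# A toy sketch of snapshot isolation preventing phantom reads.

accounts = {"checking": 500, "saving": 500}

# Transaction 1 takes its snapshot when it begins.
snapshot = dict(accounts)

# Transaction 2 commits a new row (the would-be phantom).
accounts["retirement"] = 1000

# Transaction 1 sums its snapshot: the phantom row is invisible.
print(sum(snapshot.values()))   # 1000
print(sum(accounts.values()))   # 2000 -- visible only to later snapshots
```

A real engine keeps multiple versions of each row and filters them by transaction ID, rather than copying the whole table.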
However, Snapshot Isolation does not enforce a total order of transactions, so write skew MAY happen:
Consider a bank that requires the combined balance of the checking and saving accounts to be at least $500 to waive service fees, and Alice has opted in to overdraft protection. Alice then withdraws $300 each from the checking and saving accounts concurrently. Each transaction reads the overlapping data set (both accounts), then concurrently makes a disjoint update, and concurrently commits. Everything works, except Alice's balance drops to $400, and she has to pay the fees.
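The anomaly can be reproduced with a small sketch: both withdrawals validate the invariant against the same stale snapshot, write disjoint rows, and commit. The dict-based "database" and helper are hypothetical stand-ins for real transactions.

```python
# A sketch of write skew under snapshot isolation: disjoint writes,
# each validated against a snapshot that the other write invalidates.

db = {"checking": 500, "saving": 500}

def withdraw(account: str, amount: int, snapshot: dict) -> None:
    # Each transaction checks the invariant against its own snapshot...
    if sum(snapshot.values()) - amount >= 500:
        db[account] = snapshot[account] - amount   # ...then writes one row

# Both transactions begin with the same snapshot of the database.
snap1, snap2 = dict(db), dict(db)
withdraw("checking", 300, snap1)   # sees 1000 - 300 = 700 >= 500: OK
withdraw("saving", 300, snap2)     # also sees 700 >= 500: OK

print(sum(db.values()))  # 400 -- the invariant is silently broken
```

Neither transaction overwrote the other's row, so first-committer-wins conflict detection never fires; only the cross-row invariant is violated.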
The Serializable isolation level enforces that transactions appear to have occurred in some total order, which prevents write skew.
Write skew can also be handled with row locks on the application side: before any withdrawal, we issue SELECT FOR UPDATE to lock all of Alice's accounts. Another approach is materializing the conflict: if we create a materialized view storing the sum of the checking and saving balances, and enforce the >= $500 constraint on it, the second write will trigger an integrity error.
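The materialized-conflict idea can be sketched as follows: a single "total" row that both transactions must update turns the disjoint writes into a direct conflict. Here a check-and-raise stands in for a database CHECK constraint, and the names are hypothetical.

```python
# A sketch of materializing the conflict: both withdrawals now touch the
# same "total" row, so the constraint can reject the second write.

db = {"checking": 500, "saving": 500, "total": 1000}

def withdraw(account: str, amount: int) -> None:
    if db["total"] - amount < 500:
        raise ValueError("integrity error: combined balance below $500")
    db[account] -= amount
    db["total"] -= amount      # the shared row makes the conflict visible

withdraw("checking", 300)      # total drops to 700: allowed
try:
    withdraw("saving", 300)    # would drop total to 400: rejected
except ValueError as err:
    print(err)
```

In a real database the shared row also forces the two transactions to conflict on the same lock or version, so one of them aborts instead of both committing.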
Due to the physical limit of the speed of light, replication MAY lag. Thus two consecutive operations MAY interact with different replicas, which introduces weird anomalies. The following consistency models are defined to enforce recency guarantees.
Assume Alice and Bob open a joint account with an initial balance of $500.
Read your writes
Alice deposits $100; she MUST see $600 immediately, not $500.
Monotonic writes
Alice deposits $100, then $200; Bob MUST see the deposits in the same order.
Monotonic reads
Alice deposits $100. Bob checks the balance twice; he MAY see the balance as:
- $500, $500.
- $500, $600.
- $600, $600.
He MUST NOT see $600, $500.
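The allowed and forbidden read sequences can be captured by a small checker. In this example the balance only grows through deposits, so comparing values stands in for comparing versions; the helper name is hypothetical.

```python
# A checker for monotonic reads in the deposit-only example: once Bob has
# seen $600, a later read MUST NOT go back to $500.

def is_monotonic(reads: list[int]) -> bool:
    """True if no read observes an older balance than a previous read."""
    return all(a <= b for a, b in zip(reads, reads[1:]))

print(is_monotonic([500, 500]))  # True
print(is_monotonic([500, 600]))  # True
print(is_monotonic([600, 600]))  # True
print(is_monotonic([600, 500]))  # False -- violates monotonic reads
```

A real checker compares write versions rather than values, since balances can also decrease.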
Pipeline Random Access Memory, aka PRAM
Alice deposits $100, then $200, while Bob simultaneously deposits $300, then $400. Every observer MUST see the deposits in an order that preserves each session's own order, i.e. $100 precedes $200, and $300 precedes $400, though different observers MAY see the two sessions interleaved differently.
Writes Follow Reads
Alice reads the initial balance of $500 and then deposits $100. Any observer who sees the resulting $600 balance MUST also see the $500 state the deposit was based on: you can't change that read's past. This guarantee is also known as session causality.
To encourage Alice to save more, Bob and his brother Chris agree to match any deposit made by Alice. Alice then deposits $100, and Bob, then Chris, deposit $100 each. All parties MUST see Alice's deposit precede Bob's and Chris's, but the order of Bob's and Chris's deposits MAY differ for each observer.
This is causal consistency: causally-related operations MUST appear in the same order on all processes, while concurrent operations MAY be observed in different orders.
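The orders a causally consistent observer MAY see in the matching-deposit example can be enumerated directly. The only causal edges assumed here are Alice → Bob and Alice → Chris, since both matching deposits depend on observing Alice's.

```python
from itertools import permutations

# Enumerate the observation orders allowed by causal consistency in the
# matching-deposit example: Alice MUST come first; Bob and Chris are
# concurrent and MAY appear in either order.

deposits = ["Alice", "Bob", "Chris"]

def causally_valid(order: tuple) -> bool:
    # Alice's deposit MUST precede both matching deposits.
    return (order.index("Alice") < order.index("Bob")
            and order.index("Alice") < order.index("Chris"))

valid = [o for o in permutations(deposits) if causally_valid(o)]
for o in valid:
    print(o)
# ('Alice', 'Bob', 'Chris') and ('Alice', 'Chris', 'Bob')
```

Sequential consistency shrinks this set further by forcing every observer to agree on one of the two orders, and linearizability picks the single real-time order.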
Sequential consistency guarantees that operations take place in some total order observed by all parties. It does not imply real-time constraints, though.
Using the above example, all parties MUST observe the same deposit order, either:
- Alice, Bob, Chris
- Alice, Chris, Bob
Linearizability further requires that all operations take effect in real-time order. With the above example, all parties MUST observe the same deposit order: Alice, Bob, Chris.
Strict serializability implies both Linearizability and Serializability.
We will discuss usage and implementations in other notes. Here are some research papers on the topic:
- Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions by Atul Adya, 1999
- A Critique of ANSI SQL Isolation Levels by Jim Gray et al., 1995
- Session Guarantees for Weakly Consistent Replicated Data by Douglas Terry et al., 1994
- Consistency in Non-Transactional Distributed Storage Systems by Paolo Viotti and Marko Vukolić, 2016