CSC 2/458:  Parallel and Distributed Systems
11 February 2026

Concurrency Theory -- safety & liveness for concurrent data structures

    Safety: bad things never happen
    Liveness: good things eventually happen

----------------------------------------
Sequential and concurrent _histories_

Consider a program whose threads interact only through concurrent "objects"
(data structures).  We can model execution as an (abstract, program-level)
sequence of (side-effect-free) method calls and returns -- a _history_.
The language implementation maps this history (generally, _expands_ it)
into a concrete, machine-level history (sequence of instructions).

    - a _thread history_ is _sequential_: at the abstract level, each call
      is immediately followed by its return.
        - operations (method executions) happen one at a time.
    - a _concurrent history_ is the interleaving of thread histories
        - absent mutual exclusion, operations may interleave and overlap
        - some concurrent histories are sequential, but most are not
    - histories are _equivalent_ if they comprise the same calls and
      returns (with the same arguments and return values) -- and, of
      course, they satisfy program/language semantics.

Example:

    T1    Ca------Ra  Cb---------------Rb
    T2    |   Cc--+---+---Rc   Cd--Rd  |    Ce--Re
          |   |   |   |   |    |   |   |    |   |    ...
          v   v   v   v   v    v   v   v    v   v
    H     Ca  Cc  Ra  Cb  Rc   Cd  Rd  Rb   Ce  Re

If we use a single global lock to protect any operation on any concurrent
structure, we trivially get a sequential combined history.  If we use
separate locks for every structure, we get a sequential history for each
structure.  We might get a sequential history for the program as a whole,
but we might get deadlock.  With any sort of coarse-grain locking, our
critical sections (inside the operations) will take turns, which won't
give us any parallelism.

If we use fine-grain locks or a nonblocking algorithm, we may get a
concrete history that interleaves in highly nontrivial ways.  Can we hope
to know that it will be _equivalent_ to a sequential history -- i.e., one
of the following?

    A B C D E
    A C B D E
    A C D B E
    C A B D E
    C A D B E

How do we know these are the only allowable equivalent sequential
orderings?  How might we ensure we get one of them?

----------------------------------------
Safety

Every structure has sequential semantics -- sets of allowable sequential
histories.  In the history of a queue, for example, the nth dequeue is
successful iff there have been at least n previous enqueues.  If so, it
returns the value enqueued by the nth enqueue.

For a concurrent queue, we may want to redefine dequeue to _wait_
(condition sync) when there haven't been enough previous enqueues.  But as
soon as we allow operations to wait (or use locks), we want to have
deadlock freedom.

Recall the Dining Philosophers.  Necessary conditions for deadlock:

    exclusive use - threads require access to some sort of non-sharable
        "resources"
    hold and wait - threads wait for unavailable resources while continuing
        to hold resources they have already acquired
    irrevocability - resources cannot forcibly be taken from threads that
        hold them
    circularity - there exists a circular chain of threads in which each is
        holding a resource needed by the next
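As a concrete (hypothetical, not course code) illustration, here is a
minimal C++ sketch in which all four conditions arise at once: two threads
acquire the same two mutexes in opposite orders.

    #include <mutex>
    #include <thread>

    std::mutex a, b;   // two non-sharable "resources" (exclusive use)

    void t1() {
        std::lock_guard<std::mutex> la(a);   // holds a ...
        std::lock_guard<std::mutex> lb(b);   // ... while waiting for b (hold and wait)
        /* ... work on both ... */
    }

    void t2() {
        std::lock_guard<std::mutex> lb(b);   // holds b ...
        std::lock_guard<std::mutex> la(a);   // ... while waiting for a (circular chain with t1)
    }

    int main() {
        std::thread x(t1), y(t2);
        x.join();    // may hang forever: neither mutex can be revoked
        y.join();    //   from its holder (irrevocability)
    }

Forcing both threads to acquire a before b (a static acquisition order)
removes the circularity; the other escapes are listed below.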
How might we avoid or recover from deadlock?

    exclusive use   go nonblocking, or (for readers) use RW locks,
                    seqlocks, or RCU
    hold & wait     request all resources at once, or use the _Banker's
                    algorithm_ to request a resource only when there is a
                    _feasible path_ to global completion, given (known)
                    bounds on worst-case per-process needs
    irrevocability  recover via some sort of back-out-and-retry mechanism
    circularity     prevent with static acquisition order

----------------------------------------
Atomicity ensures sequential semantics

operation-level sequential consistency

    Object O is SC if all ops on O appear to happen in some global total
    order consistent with program order in each thread.

    Problem: lack of multi-object history _composability_.

    Replicated integer example (SC but not composable):

        void put(int v):            int get():
            L.acquire()                 return A[self]
            for i in T
                A[i] := v
            L.release()

    (This implementation might actually make sense on an NCC-NUMA machine
    with infrequent writes.)

    Puts are totally ordered by the lock.  Gets are totally ordered wrt the
    puts.  But gets can happen in the middle of puts, so operations on
    _separate_ replicated integers may be seen in different orders by
    different threads, even if the underlying memory is SC.  Example:

                                         local values of shared objects
                                       T1       T2       T3       T4
                                      X  Y     X  Y     X  Y     X  Y
       initially                      0  0     0  0     0  0     0  0
       T1: X.put(1) begins            1  0     1  0     0  0     0  0
       T2: Y.put(1) begins            1  1     1  1     0  1     0  0
       T3: X.get() returns 0          1  1     1  1     0  1     0  0
       T3: Y.get() returns 1          1  1     1  1     0  1     0  0
       T1: X.put(1) finishes          1  1     1  1     1  1     1  0
       T4: X.get() returns 1          1  1     1  1     1  1     1  0
       T4: Y.get() returns 0          1  1     1  1     1  1     1  0
       T2: Y.put(1) finishes          1  1     1  1     1  1     1  1

    SW write buffer (also SC but not composable):

    Imagine a shared location to which writes are buffered and occur in the
    background.  Each thread buffers its writes to this location, so it
    sees its own writes.  If we think of the location as an abstract object
    (what theoreticians confusingly call a register), it's high-level SC.
    But as soon as we have two of them, we can build the butterfly
    circularity example.

So why is SC ok for memory but not for high-level objects?  Because we have
a _single_ memory in our machine, and we're happy if all of its operations
seem to happen sequentially.  But we have multiple concurrent objects, and
we aren't happy if their individually-apparently-sequential operations
sequentialize in mutually incompatible ways.

linearizability

    A history H of object O is linearizable if it is equivalent to some
    sequential history S such that
        (a) S reflects sequential semantics, and
        (b) S reflects "real time" order: if operation A in H returns
            before operation B is called, then A precedes B in S.
    That forces consistency not only w/ program order in each thread but
    also with any other observable order.

    The replicated integer history above is not linearizable because T3's
    Y.get() sees 1 and T4's Y.get() sees 0, and the former returns before
    the latter is called.  The software write buffer is similarly
    un-linearizable because a write can return before some other thread
    calls a read that fails to see the write.

    Equivalently, a history H of object O is linearizable if every
    operation appears to happen instantaneously at some point between its
    call and return (and the order of the "instants" reflects sequential
    semantics).

        "instantaneously" precludes the replicated integer example:
            updates to the copies for threads A and B become visible at
            different times
        "between call and return" precludes the SW write buffer example:
            it returns when the op hasn't "happened" yet

    An object implementation is linearizable if all of its realizable
    histories are linearizable.

    _Hand-over-hand_ locking in a sorted linked list is an example of
    nontrivial linearizability in a fine-grain locking algorithm.
We typically reason about _linearization points_.  Often these occur at a
small number of statically identifiable instructions in the code, and we
know the moment we execute such an instruction whether we have linearized
or not.

    Example: the Treiber stack
        linearize at the CAS for a successful push or pop;
        at the read of TOS for an unsuccessful pop

    Everything before the linearization point is harmless prep.  Everything
    after is doable-by-anybody or arbitrarily-postponable cleanup.

A slightly more interesting example: the single-producer, single-consumer
queue of Fig. 3.3 in Herlihy, Shavit, Luchangco, and Spear:

    // assume sequential consistency for simplicity
    // assume ints are 64 bits wide and never overflow
    // (We can modify the algorithm to tolerate overflow if nec.)

    int head = 0, tail = 0
    T items[length]
    exception full, empty

    void enqueue(T x)
        if (tail - head == length) throw full
        items[tail % length] = x
        tail++

    T dequeue()
        if (tail - head == 0) throw empty
        T x = items[head % length]
        head++
        return x

Only the producer modifies tail; only the consumer modifies head.  Both
access items _before_ doing so.

Invariants:
    0 <= tail - head <= length
    data (if any) that have been produced but not consumed occupy
        items[head % length] .. items[(tail-1) % length]

To prove this algorithm is correct and nonblocking, we have to verify the
invariants after each individual memory write.

Where does this code linearize?  Depending on whether ops are successful,
either (1) at the increments or (2) at the read of tail and head --
specifically, at the read of whichever one is written by the _other_
thread.

Sometimes a linearization point can't be identified until later in an
operation -- e.g., "Ah, now that I see v > k, I know that I linearized when
I read p == q.next earlier."  A good example of this is unsuccessful
searches in the nonblocking list-based set of Harris and Michael, which
we'll consider in detail later in the semester.  Roughly, if I'm scanning
down a linked list, looking for k, and I see a value v that is greater than
k, I know I linearized on the load of the /next/ pointer in the predecessor
of the node containing v.  Messing with that pointer is how somebody else
could insert k.

Remarkably, sometimes we need to reason retrospectively over past history
-- we may not know where op A linearized until some other thread sees
something in op B.  This is ok if we can prove it's always possible.

Example: a shared counter with inc() and read() operations, implemented as
an array.  Inc() adds one to my slot; read() scans all slots and returns
their sum:

    atomic C[T] := {0, ...}

    int read()                          void inc()
        int rtn := 0                        int new := C[self].load + 1
        for i in T                          C[self].store(new)
            rtn := rtn + C[i].load
        return rtn

The smallest value read() can return is the sum at the time it's called.
The largest value it can return is the sum at the time of the return.
Because inc() always adds just one, every value in that range will be valid
at some point between call and return, and any sum we get will be in that
range.

Note, though, that we can get a scenario in which a thread "sees" the
"wrong" per-thread values in the course of computing that valid total:

    initially slot[i] == 0 for all i
    C starts read()
    C sees slot[A] = 0
    A calls inc(), does its work, and returns
    B calls inc(), does its work, and returns
    C sees slot[B] = 1
    C returns a value of 1

Here A's inc() has to linearize before B's, because A's op returned before
B called its op.  C linearizes after A and before B, despite the fact that
it saw a 0 in A's slot and a 1 in B's slot.

An arguably more practical example: the Izraelevitz & Scott generic dual
container.  (Maybe we'll see this later; maybe not.)

Lots more on nonblocking objects later in the semester.

Unlike sequential consistency, linearizability composes, in the sense that
if all histories of programs using object A are linearizable, and all
histories of programs using object B are linearizable, then all histories
of programs using _both_ A and B are linearizable.  We sometimes say that
linearizability is a _local_ property: knowing it applies to objects
individually gives us all we need to know about using them together.

serializability

    While it's nice to be able to safely compose histories of independent
    objects, sometimes we want to compose _operations_ into larger atomic
    operations.  (This is a different sort of composition.)
    Linearizability doesn't let us do this.  Serializability does, but at
    the cost of not being able to safely compose histories anymore:
    everything we care about has to be part of "one big managed system."
    Databases typically do this.

    Transactions (composite operations) are said to serialize if they
    appear to happen in some global total order that respects program order
    in each thread.  Note that this is allowed to be inconsistent with
    other observable orders:

        time -->
        T1  ------
        T2             ------

        ... ok if T2 serializes before T1

    If we don't like that, we can insist on (and pay for) _strict_
    serializability, which requires "real time" order (transactions appear
    to happen between start_txn and end_txn), much as linearizability does.

    A global lock can clearly be used to achieve (strict) serializability,
    but w/out any concurrency.  Other strategies are possible.  In general,
    they have to be prepared to back out and retry to recover from
    deadlock, because the basic design goals imply exclusive use,
    hold-and-wait, and possible circularity.

    One popular strategy is _two-phase locking_ (2PL):
        every object has a lock
        acquire all the locks you need before releasing any of them
        if you get stuck (detect circularity, or simply wait too long and
            lose hope), back out and retry (that's _speculation_)

    2PL guarantees strictness.  Some other (generally faster)
    implementations of serializability don't.
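A minimal C++ sketch of the 2PL recipe, with try-locks standing in for
deadlock detection or timeout; the TxObject type, the lock-everything-
up-front policy, and the retry loop are illustrative assumptions, not a
prescribed implementation.

    #include <functional>
    #include <mutex>
    #include <vector>

    // Hypothetical transactional object: data protected by its own lock.
    struct TxObject {
        std::mutex lock;
        int value = 0;
    };

    // Growing phase: acquire every lock before touching any data; if any
    // acquisition fails, release everything and retry (speculation).
    // Shrinking phase: release only after the whole composite op is done,
    // so the transaction appears atomic in real time.
    void run_transaction(const std::vector<TxObject*>& objs,
                         const std::function<void()>& body) {
        for (;;) {
            std::vector<TxObject*> held;
            bool ok = true;
            for (TxObject* o : objs) {                       // growing phase
                if (o->lock.try_lock()) held.push_back(o);
                else { ok = false; break; }                  // stuck: back out
            }
            if (ok) {
                body();                                      // composite atomic op
                for (TxObject* o : held) o->lock.unlock();   // shrinking phase
                return;
            }
            for (TxObject* o : held) o->lock.unlock();       // undo partial acquisition
            // a real system would back off, or detect the cycle, before retrying
        }
    }

    // Example use: atomically transfer between two "accounts".
    void transfer(TxObject& from, TxObject& to, int amt) {
        run_transaction({&from, &to},
                        [&] { from.value -= amt; to.value += amt; });
    }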
Difference between high-level SC and serializability: SC applies to a
static set of operations on individual objects; serializability creates
composite operations (transactions) on multiple objects.

quiescent consistency

    Operations appear to happen in some total order; _nonoverlapping_
    operations appear to occur in real-time order.  Applies to individual
    objects -- no composites.  Operations not separated by quiescence may
    not occur in program order.  E.g., I enqueue x and then y; your dequeue
    operation overlaps both enqueues, and you come out with y.

                                       SC    L    S    SS   QC
    equiv. to a seq. order             +     +    +    +    +
    respects program order             +     +    +    +    -
    consistent w/ real time            -     +    -    +    *
    op can touch multiple objects      -     -    +    +    -

        (* only for nonoverlapping operations)

So SS dominates the other 4, in the sense that any history that is SS is
also QC, SC, L, and S.  Similarly, L dominates QC and SC, and S > SC.  But
the stronger properties can get in the way of composability:

    local: histories compose           -     +    -    -    +

Note that when we use sequential consistency at the level of the hardware
memory model, it does, effectively, respect real time, because it's being
applied to the whole system -- there's nothing "external" we can use to
"see" reorderings.  But if you're a computer architect building memory,
making that memory SC system-wide is a challenge precisely because of
non-composability!

Note also that strict serializability and linearizability are equivalent if
we consider all the system's data to be a single object.

========================================
liveness

multiple levels of nonblocking guarantees

    wait-free           very strong; generally too expensive -- requires
                        helping
    lock-free           can be very fast in ad hoc cases
    obstruction-free    moves progress out-of-band; can be quite simple

All three levels are deadlock-free.  Lock-free algorithms are also
livelock-free.  Wait-free algorithms are also starvation-free.

Leader election (consensus) with CAS is wait-free (see the sketch at the
end of these notes).  The Treiber stack (and the H&M list and M&S queue) is
lock-free but not wait-free.  The SPSC queue above is wait-free.  There's a
natural obstruction-free deque (that isn't lock-free).  Many SW TM systems
are (only) obstruction-free.

Anything can, in principle, be made wait-free, but the construction is
messy.  Intuition:

    - shared /announce/ array of high-level op descriptors, indexed by
      thread
    - per-object /responses/ array of result info, also indexed by thread
    - before I perform an op on object X, I scan the two arrays and _help_
      any op that hasn't completed yet
    - performing an op involves
        - indirection to the root of every object
        - copying the whole thing -- or at least its "spine"
        - checking to make sure the copy is consistent
        - creating a new version
        - installing it with CAS
    - lots of messy race conditions.  Also ABA.

Helping isn't always necessary, though: witness the SPSC queue.  Also the
increment-only counter given above (inc self, return sum-of-scan).

Important work in recent years (Petrank et al., etc.) has developed
techniques to move helping off the common code path -- don't add yourself
to the announce array unless you fear you're starving; only check and help
once in a while.  The resulting algorithms tend to be pretty fast, though
space remains linear in the number of threads in the system.

----------------------------------------
fairness

LOTS of possible definitions.

_Weak fairness_: any thread waiting for a condition that is continuously
true eventually takes another step.

_Strong fairness_: any thread waiting for a condition that is true
infinitely often eventually takes another step.  Often impractical: how do
we know that the scheduler doesn't pathologically let me look only when the
condition is false?

Most interesting real-world structures aren't even weakly fair.  Even for,
say, a wait-free queue, you can imagine an execution in which every one of
my dequeues fails ("sorry, empty queue") -- even though other threads pass
data in and out.
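To close, here is the consensus-with-CAS example mentioned under liveness
above, as a minimal C++ sketch; the zero-means-no-leader convention and
nonzero thread ids are assumptions of the sketch.  Every caller finishes in
a bounded number of its own steps no matter what other threads do, which is
what makes it wait-free.

    #include <atomic>

    std::atomic<int> leader{0};        // 0 means "no leader chosen yet"

    // One-shot leader election: all callers agree on the same winner.
    // A single CAS either installs my_id or fails because someone else
    // already won; either way we return immediately -- no loops, no waiting.
    int decide(int my_id) {            // my_id assumed nonzero
        int expected = 0;
        leader.compare_exchange_strong(expected, my_id);
        return leader.load();
    }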