CSC 2/458: Parallel and Distributed Systems
Jan. 21 and 23, 2026

** reading for 23 Jan: Sutter and Larus; SMS Chap. 1 (see schedule page)
** reading for 26 Jan: C++ threading tutorial; pthreads manual;
   PLP thread implementation material
** assignment 1 has been posted; due 1 Feb. (see the web or Blackboard)

Why parallelism?
    speedup
    conceptual clarity, esp. for servers
        recall async models if you took 254 last fall
    physics: coping with real-world parallelism, distribution

Physical levels
    Internet
    data center or supercomputer
    multi-socket server
    multicore processor
    multithreaded core
    ILP
        superscalar
        pipelined
    vectors, GPGPU
    circuits
    devices

    terminology:
        concurrent (logically parallel)
        parallel (physically parallel)
        distributed (far apart [whatever that means] -- usually no shared memory)
        NB: some authors use "concurrent" and "parallel" differently,
            e.g., for what I call task parallel and data parallel

Why might concurrency be hard?
    The interleaving problem
        exponential complexity in the general case
    Independent failures, at least in the distributed case

Conceptual levels -- increasing expressiveness & complexity
    parallel libraries
    deterministic parallelism
        independent tasks
        e.g., quicksort or {f(x) | x in C}
    explicitly synchronized
        event-driven (concurrent, not nec. parallel)
        thread-based (shared variables)
        message-based
    low-level races (define)
        necessary to build the above
        or (in rare cases) to maximize performance
        BEWARE PREMATURE OPTIMIZATION

Dimensions of the parallel programming design space
    hardware
        sequential ISA
            pipelined/superscalar
        parallel ISA
            vector/SIMD, VLIW/EPIC, GPGPU, other accelerators
        UMA, NUMA (remote caching?), NORMA
        SMP, CMP, SMT, chiplets, SoC
    programming model
        data parallel v. task parallel
        shared memory v. message passing
        split-merge v. fork-join
        etc.
    libraries v. languages v. language extensions

----------------------------------------
Administrivia

    prerequisites: CSC 254 and CSC 2/456, or equivalent

    web site: www.cs.rochester.edu/courses/258/spring2026/

    intro TA Rebecca Salganik

    discussion group in Blackboard

    grading:
        probably no exams
        probably some quizzes
        2 or 3 whole-class programming assignments
        major individual or small-group projects
        optional presentations (reduces expectations for project)
        ** class participation

    more informal than most classes
        syllabus very loose; can adjust to fit interests of class

    I ABSOLUTELY EXPECT **EVERYONE** TO COME TO CLASS PREPARED AND TO
    PARTICIPATE ACTIVELY IN DISCUSSION.  IF YOU CANNOT DO SO, PLEASE DO
    NOT ATTEND.

    Academic Honesty -- see the web site
        Note that use of Gen AI is permitted
        *** so long as you very clearly document how you used it ***

    core topics:
    - Implementation of threads.
    - Parallelization strategies: speedup, efficiency, Amdahl's law, etc.
    - Synchronization: hardware primitives, clocks, mutual exclusion,
      transactions, nonblocking data structures, safe memory reclamation.
    - Parallel machine architectures: multicore and multithreaded chips;
      large-scale multiprocessors (with and without coherence); clusters;
      interconnection networks.
    - Coherence and consistency: hardware-level memory models, cache
      coherence protocols.
    - Parallel and distributed programming models and interfaces:
      language threads, pthreads, MPI, OpenMP, Cilk, TBB, sockets, remote
      procedure call (RPC), CUDA, Spark, transactional memory (TM),
      persistence.
    - Parallel semantics: memory models; consensus; the consensus
      hierarchy; safety (linearizability, serializability, etc.);
      liveness (nonblocking progress); laws of order.
    - Distributed semantics: fail-stop versus Byzantine failure models;
      the FLP theorem; the CAP theorem; two- and three-phase commits;
      Paxos and Raft; reliable group communication; checkpointing;
      message logging.

    In keeping with the multicore revolution and my own personal
    interests, the course this semester will be weighted somewhat toward
    shared-memory parallelism.

    possible additional topics:
    - Parallel program optimization techniques: synchronization
      granularity, dependences, scheduling, load balancing.
    - Distributed file systems: NFS, xFS, Coda, etc.
    - Supercomputers and supercomputing clusters; vector, GPU, and TPU
      processing.
    - Data-parallel languages: HPF; co-array Fortran, UPC, and Titanium;
      Fortress, Chapel, and X10; Habañero.
    - Race detection and deterministic execution.
    - Parallel functional languages: Concurrent Haskell, Erlang, etc.
    - Parallelizing compilers.
    - RDMA networks: InfiniBand, etc.
    - Software distributed shared memory.

    << brainstorm additional topics >>

The rest of today's class will be review, to make sure we're all on the
same page.  Be sure to ask questions if you aren't following any of this.

ASSIGNMENT 0 IS ON THE WEB; DUE BEFORE FRIDAY CLASS

    << survey:
        name, year (G/U), dept
        courses taken
            languages/compilers
            operating systems
            architecture
        experience with
            pthreads
            explicitly parallel languages (Java, C#, Ada, HPF, ...)
            Unix socket programming
            event-driven programming
            MPI
            multiprocessors
        topics you'd most like to see covered
    >>

Assignment 1 is also on the web; due 1 Feb.

========================================
A little history

    Early computers (<= 1940s) were single user, with busy-wait (polling)
    I/O.  The first motivation for concurrency/parallelism came from
    coping with devices: busy waiting for devices wasted *very* expensive
    cycles.

    switching between (batch) users on I/O (1950s)
        Allowed cycles to be used for somebody else while the current
        application waited.  This is concurrency -- multiprogramming --
        but with no interaction between concurrent entities.

    asynchronous I/O interrupts (early to mid 1960s)
        Race conditions in accessing memory locations from normal code
        and interrupt drivers.
        *** First interacting concurrent entities

    programmable I/O (e.g. IBM channels -- mid 1960s)
        Nontrivial memory activity from the device.
        *** First interacting _parallel_ entities

    interprocess communication in timesharing systems (early 1970s)
        Quasi-parallel *user* programs (concurrent [i.e. logically
        parallel] but not physically parallel).  Internet servers did the
        same thing later that decade.

    networks led to truly parallel distributed programs (early 1970s)

    multiprocessors led to truly parallel non-distributed programs
        Mid to late 1960s in high-end scientific and business machines.
        Early 1970s in academia.
        Small-scale multiprocessors (via multi-ported memory) by mid to
        late 1970s.
        Multicomputers and shared-bus multiprocessors by early 1980s.
        Network-based multiprocessors (BBN) by the mid 1980s.

    1990s dominated by ILP -- clobbered scalability

    Hit limits on both power and ILP in the early 2000s.  Can't make
    uniprocessors faster, because we can't increase the clock rate.
    Enter SMT and multicore.  Communication/computation ratios are back
    to near 1990 levels (because on chip), though scalability is limited
    by power dissipation.  Parallelism is now everybody's problem: how
    many cores can we routinely use productively?
    Continued improvements have come largely from increased numbers of
    transistors on chip (more cores, bigger caches) and from
    computational accelerators: GPUs and more specialized devices --
    notably, recently, TPUs/NPUs.

----------------------------------------
SMS Chap. 1

    condition synch v. atomicity
        the latter is harder: universal quantifier, rather than existential

    mutual exclusion (locking) is the most common solution
        granularity -- complexity/concurrency tradeoff

    spinning v. blocking
        TAS lock (much better ones to come!)
        the latter is built on the former
        spin-then-yield is sort of a hybrid

    safety
        deadlock, e.g., w/ 2 locks (not the only source!)
    liveness
        starvation
        NB: we typically assume that the underlying system runs all
        runnable threads

    << everybody understand the quantifiers on p. 8? >>
        safety:   forall states S [P(S)]
        v.
        liveness: forall states S [P(S) -> exists state T [Q(T)]]
                      where T is a successor of S
              or  forall states S [P(S) -> exists state T
                      [forall states U [R(U)]]]
                      where U succeeds T, which succeeds S

------------------------------------
<< Exercise: brainstorm solutions to Dining Philosophers
    want
        safety
            atomicity on transitions between free and held-by-me
            hold both forks
        throughput
            eating
            no deadlock
        and liveness
            every hungry philosopher eventually gets to eat
        fairness: over time, no philosopher gets to eat more often than
            another equally hungry one
    SM v. MP formulations
>>
------------------------------------

build busy-wait locks w/ atomic instructions

    what instructions _are_ atomic?

    reads and writes only
        of historical interest only
        can solve mutual exclusion for n threads
        but cannot solve *wait-free consensus* (everybody knows who the
        winner is within bounded time) for even 2 threads

        The first 2-thread solution was published by Dijkstra in 1965 and
        attributed to Theodorus Dekker.  Peterson published a
        substantially simpler solution in 1981.  (Note: this code assumes
        sequential consistency.  It needs fences or atomic accesses to
        run correctly on a modern machine; more on this later.)

            class lock
                bit turn                                // 0 or 1
                bool interested[2] := {false, false}

            lock.acquire():
                other := 1 - self                       // self is 0 or 1
                interested[self] := true
                turn := self
                while interested[other] && turn != other;   // spin
                // if interested[other] == true and turn == other
                // then I set turn first, and I win

            lock.release():
                interested[self] := false

        A tree of Peterson locks can be used for n-thread mutual
        exclusion.  It takes O(lg n) time and O(n) space.

        An arguably more attractive solution was published by Lamport in
        1987.  It takes O(1) time in the absence of contention and O(n)
        time when threads collide.  It also takes O(n) space in the
        absence of bounds on rates of progress:

            start:
                X := pid
                if Y <> free goto start
                Y := pid
                if X <> pid
                    /* Make sure no one else is in the critical section.
                       Several methods are possible (none shown here);
                       Lamport proposed two; both require O(n) time with
                       n threads in the system; one requires O(n) space,
                       the other bounds on relative rates of progress. */
                    if Y <> pid goto start

                -- critical section

                Y := free

        Subsequent work by Hesselink, Buhr, and Dice [C&C:P&E, 2015]
        shows how to combine O(1) time in the no-contention case with
        O(log n) time in the high-contention case, with O(n) total space.
        But again, nobody does that in practice.
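        Returning to Peterson's algorithm: here is a minimal C++ sketch
        of the lock -- my own illustration, not part of these notes.
        Using std::atomic with the default (sequentially consistent)
        ordering supplies the sequential consistency the pseudocode
        assumes; the class and method names are hypothetical.  A version
        with weaker, hand-chosen orderings appears near the end of these
        notes.

            #include <atomic>

            // Peterson's 2-thread lock; every atomic access uses the default
            // memory_order_seq_cst, matching the SC assumption above.
            class peterson_lock {
                std::atomic<int> turn{0};                       // 0 or 1
                std::atomic<bool> interested[2] = {{false}, {false}};
            public:
                void acquire(int self) {                        // self is 0 or 1
                    int other = 1 - self;
                    interested[self].store(true);
                    turn.store(self);
                    while (interested[other].load() && turn.load() != other)
                        ;                                       // spin
                }
                void release(int self) {
                    interested[self].store(false);
                }
            };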
    atomic ops                                  consensus number
        test-and-set (TAS)                           2
        swap                                         2
        fetch-and-increment (FAI)                    2
        compare-and-swap (CAS)                      oo
        load-linked / store-conditional (LL/SC)     oo

    test-and-set lock:

        type lock = Boolean := false

        procedure acquire(L : ^lock)
            repeat until test_and_set(L) = false

        procedure release(L : ^lock)
            L^ := false

    test-and-test-and-set lock:

        procedure acquire(L : ^lock)            // "test-and-test-and-set" lock
            while test_and_set(L) = true
                repeat until L^ = false

    more on busy-wait synch later in the term

    emulation of arbitrary fetch-and-phi using CAS:

        procedure fetch_and_phi(L : location; P : value -> value)
            repeat
                value old := *L
                value new := P(old)
            until CAS(L, old, new)
            return old

    or with LL/SC:

        procedure fetch_and_phi(L : location; P : value -> value)
            repeat
                value old := LL(L)
                value new := P(old)
            until SC(L, new)
            return old

    or, if the HW won't let you compute P between LL and SC:

        procedure fetch_and_phi(L : location; P : value -> value)
            repeat
                value old := *L
                value new := P(old)
            until LL(L) == old && SC(L, new)
            return old

    three key differences between CAS and LL/SC:
        (1) You can have only one LL outstanding.
        (2) SC can fail spuriously.
        (3) CAS can't tell if the value in L has changed and then changed
            back since the load; SC can.

    LL/SC has a natural implementation in cache coherence protocols.

    CAS is found on z, x86, ia64, SPARC, and recent ARM.
    LL/SC is found on MIPS, Alpha, Power, RISC-V, and ARM.

    Many machines provide additional fetch-and-phi ops, e.g.
    fetch-and-add (FAA).  N threads can do an atomic increment in O(N)
    time with FAA; they may need O(N^2) with CAS.

----------------------------------------
where do threads come from?

    Review of 2/454
        OS: multiplex kernel threads on HW threads
        user: multiplex user threads on kernel threads

    multi-step implementation
        coroutines
        ready list
        preemption w/ signal lock-out
        multicore w/ busy-wait locks

    Coroutines
        As in Simula and Modula-2.  Covered in section 8.6 in PLP.
        Multiple execution contexts, only one of which is active.

        transfer(other):
            save all callee-saves registers on stack, including ra and fp
            *current := sp
            current := other
            sp := *current
            pop all callee-saves registers (including ra, but NOT sp!)
            return (into different coroutine!)

        Other and current are pointers to CONTEXT BLOCKs.  A context
        block contains sp; it may contain other stuff as well (priority,
        I/O status, accounting info, etc.).

        No need to change the PC; it always changes at the same place.

        Create a new coroutine in a state that looks like it's blocked in
        transfer.  (Or maybe let it execute and then "detach".  That's
        basically early reply.)

    Run-until-block threads on a single process
        Need to get rid of the explicit argument to transfer.

        Ready list data structure: threads that are runnable but not
        running.

        reschedule:
            t : cb := dequeue(ready_list)
            transfer(t)

        To do this safely, we need to save 'current' somewhere.  There
        are two ways to do this.

        Suppose we're just relinquishing the processor for the sake of
        fairness (as in MacOS or Windows 3.1):

            yield:
                enqueue(ready_list, current)
                reschedule

        Now suppose we're implementing synchronization:

            sleep_on(q):
                enqueue(q, current)
                reschedule

        Some other thread/process will move us to the ready list when we
        can continue.

    Preemption
        Use timer interrupts (in the OS) or signals (in a library
        package) to trigger involuntary yields.  Requires that we protect
        the scheduler data structures:

            yield:
                disable_signals()
                enqueue(ready_list, current)
                reschedule
                re-enable_signals()

        Note that reschedule takes us to a different thread, possibly in
        code other than yield.
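        On a Unix-like system, the disable_signals() / re-enable_signals()
        operations above might simply mask the preemption signal.  A
        hypothetical C++ sketch (function and variable names are mine,
        not from the notes), assuming the package drives preemption with
        SIGVTALRM from setitimer:

            #include <signal.h>

            // Defer preemption by blocking the signal that triggers
            // involuntary yields, so a handler can't run while we
            // manipulate the ready list.
            static sigset_t preempt_set;        // initialized to contain SIGVTALRM

            void init_preemption_mask() {
                sigemptyset(&preempt_set);
                sigaddset(&preempt_set, SIGVTALRM);
            }

            void disable_signals() {
                sigprocmask(SIG_BLOCK, &preempt_set, nullptr);
            }

            void re_enable_signals() {
                sigprocmask(SIG_UNBLOCK, &preempt_set, nullptr);
            }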
        Invariant: EVERY CALL to reschedule must be made with signals
        disabled, and must re-enable them upon its return:

            disable_signals()
            if not <desired condition>
                sleep_on(q)
            re-enable_signals()

    Multiprocessors
        Disabling signals doesn't suffice:

            yield:
                disable_signals()
                acquire(scheduler_lock)         // spin lock
                enqueue(ready_list, current)
                reschedule
                release(scheduler_lock)
                re-enable_signals()

            disable_signals()
            acquire(scheduler_lock)             // spin lock
            if not <desired condition>
                sleep_on(q)
            release(scheduler_lock)
            re-enable_signals()

----------------------------------------
Sutter & Larus

    Depressing how little has changed since this appeared, over 20 years
    ago.

    Server concurrency is a "solved problem"; client concurrency is not
        << WHY? >>
        "nonhomogeneous code; finegrained, complicated interactions; and
        pointer-based data structures"

    Dimensions of client parallelism
        size of tasks
        degree of coupling of tasks
        << examples? >>

        Embarrassing parallelism
            change the white balance of a picture
        Regular parallelism
            convolution of a picture
        Irregular parallelism
            mesh refinement
        task ("unstructured") parallelism
            game with threads for each character, for physical processes,
            for strategy, communication, stats gathering, ...

    Composability problem for locks
        lock leveling & hierarchies
            e.g.: all accounts at one level; I/O at a higher level
        lock-free (nonblocking) programming
            avoids preemption while holding a lock, but doesn't compose
            not general purpose
        TM (stay tuned)

    Functional languages
        futures
        side effects :-(
        higher-order functions
            map, (commutative) reduce

    Debugging, performance analysis, ...
        race detection
        deterministic replay
        Heisenbugs

----------------------------------------
The A-B-A problem

    If memory is dynamically allocated, I have to worry that a CAS will
    succeed even when it shouldn't, because it points to a *new* block
    that happens to have the same address as a no-longer-existent block
    to which the pointer used to point.  This is a serious problem for
    certain algorithms.

    Suppose in a Treiber stack I read T = tos, read N = tos->next, and
    then try to pop via CAS(tos, T, N).  But just before my CAS I go to
    sleep.  Somebody else comes along, does a bunch of pops & pushes, and
    leaves the stack pointing to the same node as before, but with a
    different /next/.  When I wake up my CAS may succeed, even though N
    is the wrong value to use -- we may easily corrupt the stack.  (See
    the sketch at the end of this section.)

    One fix: counted pointers for CAS.

    LL/SC isn't vulnerable to A-B-A in the same way: SC fails if anybody
    wrote the word since the LL, even if they wrote the same value.

    General solution: safe memory reclamation (SMR) [PODC 2002; SMS 8.7]
        "hazard pointers" -- Michael
        "repeat offenders problem (ROP)" -- Herlihy, Luchangco, Martin,
            and Moir
        winners of the 2022 Dijkstra Prize
        Main drawback is a W-R memory fence on every dereference.
        EBR and IBR as cheaper but less space-efficient alternatives.
        (more later)
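    To make the Treiber-stack scenario above concrete, here is a minimal
    C++ sketch -- my own illustration, not from the notes -- of the
    A-B-A-vulnerable pop; the node type and the tos variable are
    hypothetical names.

        #include <atomic>

        struct node { int value; node* next; };

        std::atomic<node*> tos{nullptr};            // top of stack

        node* pop() {
            node *t, *n;
            do {
                t = tos.load();
                if (t == nullptr) return nullptr;   // stack empty
                n = t->next;                        // read next *before* the CAS
                // If t is popped and freed here, and a new node with the
                // same address is pushed, the CAS below can still succeed
                // -- with a stale n -- corrupting the stack.  (Safely
                // dereferencing t->next at all is what SMR is for.)
            } while (!tos.compare_exchange_weak(t, n));
            return t;
        }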
----------------------------------------
Race conditions

    A race condition (or just "a race") occurs when program behavior
    depends on the order in which events occur in different threads.
    Race conditions are not all bad; sometimes any of the possible
    program outcomes are ok (e.g. workers taking things off a task
    queue).

    Data races v. synchronization races
        essentially unannotated v. annotated: synchronization races are
        the expected ones, which the programmer tells the implementation
        to implement correctly

    The classic
        T1: x++             T2: x++
    is a data race.  There is a growing consensus that data races are
    bugs.

    Another example: initialization

        // ready == false
        T1:                             T2:
        p = new foo(args)               while (!ready) {}   // spin
        ready = true                    ... use *p ...

    Butterfly "causality" example

        // x == y == 0
        T1:             T2:
        y = 1           x = 1
        a = x           b = y

        a == b == 0 ?

    IRIW example:

        // x == y == 0
        T1:         T2:         T3:         T4:
        x = 1       a = x       c = y       y = 1
                    b = y       d = x

        b == d == 0 && a == c == 1 ?

        Here forcing order in the middle two threads isn't enough.
        The problem is _write atomicity_ (more on this later).

    Note that data races are unavoidable in the _implementation_ of
    synchronization objects.  So we have to understand how they work (and
    how to control them) if we're going to build such objects.

    Example: Peterson's alg. can't be written safely as shown above.
    We must replace

        class lock
            bit turn                                // 0 or 1
            bool interested[2] := {false, false}

        lock.acquire():
            other := 1 - self                       // self is 0 or 1
            interested[self] := true
            turn := self
            while interested[other] && turn != other;   // spin

        lock.release():
            interested[self] := false

    with

        class lock
            atomic bit turn                         // 0 or 1
            atomic bool interested[2] := {false, false}

        lock.acquire():
            other := 1 - self                       // self is 0 or 1
            interested[self].store(true, ||W)
            turn.store(self, ||R)
            while interested[other].load(||R) && turn.load(||) != other;   // spin
            fence(R||RW)

        lock.release():
            interested[self].store(false, RW||)

    The special loads and stores are mutually sequentially consistent;
    the fence orders them with respect to subsequent ordinary loads and
    stores.  Lots more on this later!

    Modern languages are converging on semantics (MEMORY MODELS) that say
    circularity never occurs in "properly synchronized" (data-race-free)
    programs.
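    For instance, the initialization example above becomes data-race-free
    (and thus properly synchronized) if ready is an atomic flag written
    with release ordering and read with acquire ordering.  A minimal C++
    sketch, with hypothetical names of my own:

        #include <atomic>

        struct foo { int x; explicit foo(int v) : x(v) {} };

        foo* p = nullptr;
        std::atomic<bool> ready{false};

        void producer() {
            p = new foo(42);                                // stand-in for args
            ready.store(true, std::memory_order_release);   // publish *p
        }

        void consumer() {
            while (!ready.load(std::memory_order_acquire))
                ;                                           // spin until published
            // The acquire load synchronizes with the release store, so the
            // constructor's writes to *p are visible here.
            int v = p->x;
            (void)v;
        }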