CSC 2/458
Parallel and Distributed Systems
Spring 2019.
Course Description
With the explosion of the Internet over the past 25 years, and with the
proliferation of PC clusters in the server/data center marketplace,
distributed computing has become central to most of computer
science. It also remains the dominant computing paradigm in very
high-end scientific computation.
With the end of Dennard scaling and the rise of multicore some 15 years
ago, shared memory parallelism has become similarly ubiquitous in the
desktop/laptop/cell phone market.
Almost every nontrivial program today is multithreaded.
CSC 2/458 is a loosely structured course devoted to all aspects of
parallel and distributed systems.
Core topics to be covered include
- Implementation of threads.
- Parallelization strategies: speedup, efficiency, Amdahl’s law, etc. (a short worked example follows this list).
- Synchronization: hardware primitives, clocks, mutual exclusion, transactions, nonblocking data structures.
- Parallel machine architectures: multicore and multithreaded chips; large-scale multiprocessors (with and without coherence); clusters; interconnection networks.
- Coherence and consistency: hardware-level memory models, cache coherence protocols.
- Parallel programming models and interfaces: language threads, pthreads, MPI, OpenMP, Cilk, TBB, sockets, remote procedure call (RPC), transactional memory (TM), determinism (a minimal pthreads sketch also appears after this list).
- Parallel semantics: memory models; consensus; the consensus hierarchy; safety (linearizability, serializability, etc.); liveness (nonblocking progress).
- Fault tolerance and reliability: fail-stop versus Byzantine failure models; the FLP theorem; two- and three-phase commits; Paxos and Raft; reliable group communication; checkpointing; message logging.
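As a quick illustration of the speedup limits mentioned in the list above (an aside, not part of the assigned reading), Amdahl’s law bounds the speedup achievable on p processors when a fraction s of the work is inherently serial:

    speedup(p) = 1 / (s + (1 - s) / p)

With s = 0.1, for example, even arbitrarily many processors give a speedup of at most 1 / 0.1 = 10.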
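Similarly, for readers who have not yet used any of the thread interfaces named above, here is a minimal pthreads sketch in C (an illustration only; the thread count, file name, and worker function are hypothetical, not taken from the course materials):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    /* Each worker just announces itself; a real program would
       partition work across the threads instead. */
    void *worker(void *arg) {
        long id = (long) arg;
        printf("hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, (void *) i);
        for (long i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Compile with, e.g., cc -pthread hello.c.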
In keeping with the multicore revolution and the
instructor’s current interests, the course this semester will be
weighted somewhat toward shared memory parallelism.
Prerequisites:
CSC 2/454 and 2/456 or equivalent.
Warning:
This will be a time-intensive and discussion-heavy class. We will
be drawing on a wide range of written material, including multiple
journal and conference papers.
Reading is mandatory and must be completed in
advance.
Barring illness and similar issues, class attendance is also mandatory.
If you can’t commit to being present and
prepared for each class session, please don’t take the
course.
Additional topics will depend to some degree on the interests of
participants. Possibilities include
- Parallel program optimization techniques: synchronization granularity, dependences, scheduling, load balancing.
- Distributed file systems: NFS, xFS, Coda, etc.
- Supercomputers and supercomputing clusters; vector and GPGPU processing.
- Data-parallel languages: HPF; C*; Split-C; co-array Fortran, UPC, and Titanium; Fortress, Chapel, and X10.
- Race detection and deterministic execution.
- Parallel functional languages: Concurrent Haskell, Erlang, etc.
- Parallelizing compilers.
- RDMA networks: InfiniBand, etc.
- Software distributed shared memory.
- Component models: CORBA, .NET, JavaBeans.
Last Change: 06 January 2019