CSC 2/458
Parallel and Distributed Systems
Spring 2022
Both parallel and distributed systems can be defined as collections of
processing elements that communicate and cooperate to achieve a common goal.
Advances in processor technology have resulted in today's
computer systems using parallelism at all levels: within
each CPU, by executing multiple instructions from the same thread
of control simultaneously (superscalar architectures/instruction-level parallelism);
by executing multiple instructions from different threads of control
simultaneously (simultaneous multithreading); by placing multiple
cores on a single chip (chip multiprocessors); by using multiple
chips to form multiprocessors; and by networking multiple nodes to form
a cluster. As a result, parallel systems are increasingly ubiquitous.
At the same time, advances in networking technology have created
an explosion of distributed applications, making distributed computing
part of the fabric of our day-to-day lives.
This course will focus on the principles of parallel and distributed systems
and the implementation and performance issues associated with them.
We will examine programming models/interfaces to parallel and
distributed computing,
interprocess communication, synchronization and consistency models,
fault tolerance and reliability, distributed process management,
parallel machine architectures, parallel program optimization,
and the interaction of the compiler, run-time, and machine architecture.
This course follows the College credit hour policy for four-credit courses. It meets
twice weekly, for a total of three academic hours per week.
The course also includes independent out-of-class assignments and activities averaging one
academic hour per week.
These activities include reading academic papers for in-class discussion,
researching a specific course topic for in-class presentation of the material,
and group meetings to discuss project design, tools, and/or application domains for a term project.
Class time: 3:25-4:40 p.m., Mondays and Wednesdays.
Class location: Dewey Room 2110E or online (Zoom link via Blackboard)
Instructor:
Sandhya Dwarkadas
e-mail: sandhya at cs
Office: Wegmans 3403, 275-5647
Office hours: by appointment; feel free to reach out via e-mail.
TA:
Andrew Sexton
e-mail: asexton2 at cs
Office hours: Mondays and Thursdays 1-2 p.m.
Questions and Answers
E-mail is best. Please use the class
discussion board to post
questions or information of general interest. When appropriate,
I will use a class e-mail list to disseminate information/instructions.
Prerequisites:
CSC 252 or equivalent, and C/C++ programming experience
under Unix. CSC 254 and CSC 256 are also recommended.
Material we will use:
There is no single required textbook for this course. Please see the
class schedule for pointers to slides and readings;
the current content reflects previous offerings.
The exact content covered during the
semester will depend to some extent on the interests of the students.
In addition to papers covering the state of the art,
we will draw material for the course from several sources, the main ones
being the books listed below. These have been placed on two-hour reserve
at Carlson, and some are available online (links accessible via the
course Blackboard page):
Distributed Systems, 2017 edition, Maarten van Steen and Andrew S. Tanenbaum
Parallel Computer Architecture: A Hardware/Software Approach, 1999 edition,
David E. Culler, Jaswinder Pal Singh, and Anoop Gupta
Introduction to Parallel Computing, Ananth Grama, George Karypis,
Vipin Kumar, and Anshul Gupta (Addison-Wesley)
Distributed Systems: Concepts and Design, third edition, George Coulouris, Jean Dollimore, and Tim Kindberg (Addison-Wesley)
The Art of Multiprocessor Programming, Maurice Herlihy and Nir Shavit (Morgan Kaufmann)
High Performance Compilers for Parallel Computing, 1996 edition, Michael Wolfe
Shared-Memory Synchronization, Michael L. Scott
Optimizing Compilers for Modern Architectures, 2002 edition, Randy Allen and Ken Kennedy
Foundations of Multithreaded, Parallel, and Distributed Programming, 2000 edition, Gregory R. Andrews (Addison-Wesley)
(Some) Topics Covered:
Basics of parallelization and parallelization strategies
Parallel/distributed programming models and interfaces - shared memory
vs. message passing vs. remote procedure call (RPC) vs. global address
space languages; e.g., pthreads, MPI, MapReduce, OpenMP, HPF, UPC, and
language-level threads (e.g., Java) (see the pthreads sketch following this list)
Parallel machine architectures - shared and distributed memory machines,
multicore and multithreaded chips, interconnection networks
Parallel program optimization techniques - synchronization granularity,
dependences, scheduling, load
balancing
Synchronization - hardware primitives, logical and physical clocks,
mutual exclusion, distributed transactions, transactional memory
Consistency and coherence - data-centric versus client-centric consistency
models, cache coherence protocols
Fault tolerance and reliability - fail-stop versus byzantine failure models,
two- and three-phase commits, reliable group communication, checkpointing,
message logging
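As a concrete taste of the shared-memory model and lock-based synchronization listed above, here is a minimal pthreads sketch. It is purely illustrative (the array size, thread count, and the worker/lock names are arbitrary choices, not course-provided code): each thread sums a private slice of a shared array and folds its partial sum into a global total under a mutex.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4                 /* illustrative thread count */
#define N 1000000                  /* illustrative problem size */

static int data[N];
static long total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
    long partial = 0;
    for (long i = lo; i < hi; i++)
        partial += data[i];        /* private work: no sharing, no locks */
    pthread_mutex_lock(&lock);     /* synchronize the one shared update */
    total += partial;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++) data[i] = 1;
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("total = %ld (expected %d)\n", total, N);
    return 0;
}

Compile with cc -pthread on any Unix system. Note the design point: each thread accumulates privately and touches the shared total exactly once, keeping synchronization granularity coarse (one of the optimization issues listed above).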
Assignments and Grading
There will be two or three small programming assignments, a couple of
written homework assignments, a term project, and
potentially one exam. There may also be occasional spot quizzes.
The course will consist of a combination of lectures
and student presentations. Grading will be based on the
assignments, project, presentations,
quizzes and exam (if any), and class participation and attendance.
The tentative grading scheme is as follows:
- 30% class participation and attendance, class presentation(s)
- 20% programming assignments
- 30% term project
- 20% homeworks/quizzes/exams (the latter if any)
Academic Honesty Policy:
For homeworks and programming assignments,
students are encouraged to consult each other, the TA, the
instructor, or anyone else for that matter. However, the assistance
offered or accepted should not go beyond a discussion of the problem
and a sketch of a solution. You can use the following guideline:
when it comes time for you to write your program or your homework
paper, do not use any written material from the discussion. If you
can reconstruct the discussion and complete the solution on your own,
then you have learned the material (and that is the objective of this
course!). For team projects, make sure to identify the division of
labor in your README.
Projects will generally be graded as a team rather than separately
for each individual, but corrective action could be taken if the
division of labor proves highly uneven.
Posting homework and project solutions to public repositories on sites like GitHub is a violation of the College’s Academic Honesty Policy, Section V.B.2 “Giving Unauthorized Aid.”
If you do start with someone else's code (e.g., you download a procedure from the web so that you can get the rest of your project working, or you build on someone else's work or program), you must (1) either have the author’s explicit permission or the material must be publicly available, and (2) label what parts of your code you copied and from where, clearly and prominently, when you hand it in. Note that you will get credit only for the parts of your assignment that you implemented yourself. Written assignments are similarly expected to be in your own words, and must include appropriate citation to content that you might be summarizing. Any direct use of someone else's words or pictures requires quotation and attribution.
Quizzes and exams (if any) must be strictly individual work.
Links to Relevant Documentation
POSIX Threads Programming - a tutorial from Lawrence Livermore National Lab
MPI - The Message Passing Interface (MPI) standard and tutorials (a minimal MPI example follows these links)
Cashmere
- Overview and Documentation
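To complement the MPI pointers above, here is a minimal message-passing counterpart to the pthreads sketch. There is no shared memory, so each rank computes a partial sum over its own (cyclic) slice of the iteration space, and MPI_Reduce combines the partials at rank 0. Again purely illustrative; the problem size and partitioning are arbitrary choices.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

    const long n = 1000000;                /* illustrative problem size */
    long partial = 0, total = 0;
    for (long i = rank; i < n; i += size)  /* cyclic partition of 0..n-1 */
        partial += 1;                      /* stand-in for per-element work */

    /* combine per-rank partials; only rank 0 receives the result */
    MPI_Reduce(&partial, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %ld (expected %ld)\n", total, n);

    MPI_Finalize();
    return 0;
}

Build with mpicc and run with, e.g., mpirun -np 4 ./a.out. The contrast with the pthreads version is the point: communication is explicit, and there is no shared total to protect with a lock.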
Paper (and Chapter) Reading List
Section 8.6,
“Coroutines”, and Chapter 12 (Section 12.2),
“Concurrency”, from
Programming Language Pragmatics, by Michael L. Scott.
Morgan Kaufmann Publishers, 2000.
Chapter 1 of Culler, Singh, and Gupta, as well as Tanenbaum and van Steen.
Chapters 2 and 3 of Culler, Singh, and Gupta
The Performance Implications of Thread Management Alternatives for
Shared-Memory Multiprocessors, T. E. Anderson, E. D. Lazowska, and H. M. Levy, IEEE Transactions on Computers, 38(12), December 1989.
Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors,
John Mellor-Crummey and Michael L. Scott,
ACM Transactions on Computer Systems, 9(1):21-65, February 1991.
Capriccio: Scalable Threads for Internet Services, R. von Behren, J. Condit, F. Zhou, G. C. Necula, and E. Brewer, Symposium on Operating Systems
Principles, October 2003.
Shared Memory Consistency Models: A Tutorial,
Sarita Adve and Kourosh Gharachorloo, Rice TR 9512, also appeared in
IEEE Computer, December 1996.
Architecture and Design of AlphaServer GS320,
Kourosh Gharachorloo, Madhu Sharma, Simon Steely, and Stephen Van Doren,
International Conference on
Architectural Support for Programming Languages and Operating Systems, 2000.
POWER4 System Microarchitecture, white paper, IBM Server Group,
October 2001.
The Sun Fireplane System Interconnect,
Alan Charlesworth, Supercomputing Conference, November 2001.
TreadMarks: Shared Memory Computing on Networks of Workstations,
C. Amza, A. L. Cox, S. Dwarkadas, P. Keleher, H. Lu, R. Rajamony, W. Yu, and W.
Zwaenepoel,
IEEE Computer, February 1996.
Cashmere-2L: Software Coherent Shared Memory on a Clustered Remote-Write Network,
R. Stets, S. Dwarkadas, N. Hardavellas, G. Hunt, L. Kontothanassis,
S. Parthasarathy, and M. L. Scott,
Symposium on Operating Systems Principles,
October 1997.
Implementing Remote Procedure Calls, A.D. Birrell and B.J. Nelson,
ACM Transactions on Computer Systems,
Vol. 2, No. 1, pp. 39-59, February 1984.
Spinglass: Secure and Scalable Communications Tools for
Mission-Critical Computing, Kenneth P. Birman, Robbert
van Renesse and Werner Vogels, International
Survivability Conference and Exposition, DARPA
DISCEX-2001, Anaheim, California, June 2001.
A Survey of Rollback-Recovery Protocols in Message-Passing Systems,
E. N. Elnozahy, L. Alvisi, Y. Wang, and D. B. Johnson,
ACM Computing Surveys,
34:3, pp. 375-408, September 2002.
IO-Lite: A Unified I/O Buffering and Caching System,
V. Pai, P. Druschel, and W. Zwaenepoel,
Proceedings of the Third Operating Systems Design and Implementation
Symposium, pp. 15-28, February 1999.
The Costs and Limits of Availability for Replicated Services,
Haifeng Yu and Amin Vahdat,
Proceedings of the Eighteenth ACM Symposium on Operating
Systems Principles (SOSP), October 2001.
The Horus and Ensemble Projects: Accomplishments and Limitations,
Ken Birman, Robert Constable, Mark Hayden, Christopher Kreitz, Ohad Rodeh,
Robbert van Renesse, Werner Vogels.
Proc. of the DARPA Information Survivability
Conference & Exposition (DISCEX '00), January 25-27, 2000, Hilton Head,
South Carolina.
Fundamental Challenges in Mobile Computing, M. Satyanarayanan,
ACM Symposium on Principles of Distributed Computing, 1995.
Locality-Aware Request Distribution in Cluster-based Network Servers,
Vivek Pai, Mohit Aron, Gaurav Banga, Michael Svendsen, Peter Druschel,
Willy Zwaenepoel, and Erich Nahum,
8th International Conference on Architectural Support for
Programming Languages and Operating Systems, October 1998.
Parallelization of General Linkage Analysis Problems,
S. Dwarkadas, A.A. Schaffer, R.W. Cottingham, A.L. Cox, P. Keleher,
and W. Zwaenepoel,
Human Heredity, Vol. 44, pp. 127-141, July 1994.
A Survey of Synchronization Methods for Parallel Computers,
Anne Dinning, IEEE Computer, July 1989.