ECE 2/406, CSC 2/466: Introduction to Parallel Computing using GPUs

Organization

Instructor (Lectures): Sreepathi Pai (sree at cs dot rochester dot edu)
Instructor (Labs): Alex Page (alex.page at rochester dot edu)
Class Location (lectures and labs): CSB 523
Time: MW 1650--1805
Office Location and Hours (Pai): 3409 Wegmans, by appointment, but you're welcome to drop by between 10AM and 5PM and see if I'm around

Academic Honesty

All assignments and activities associated with this course must be performed in accordance with the University of Rochester's Academic Honesty Policy. More information is available at: http://www.rochester.edu/college/honesty.

Description

(from CDCS) GPU micro-architecture, including global memory, constant memory, texture memory, SP, SM, scratchpad memory, L1 and L2 cache memory, multi-ported memory, register file, and task scheduler. Parallel programming applications to parallel sorting, reduction, numeric iterations, fundamental graphics operations such as ray tracing. Desktop GPU programming using Nvidia's CUDA (Compute-Unified Device Architecture). CPU/GPU cooperative scheduling of partially serial/partially parallel tasks.

(mine) Parallel programming is necessary to obtain performance on modern computers. Graphics Processing Units (GPUs) are processors that support massive parallelism. In this course, we will learn how to parallelize programs and run them on the GPU. Since the GPU is a fairly primitive processor, getting good performance on it is harder than on a CPU and requires programmers to be highly knowledgeable about the internals of GPU architecture. This course will cover NVIDIA's CUDA programming language and all the internals of NVIDIA GPUs required to write fast programs.
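
For students who have not seen GPU code before, the sketch below shows roughly what a CUDA C program looks like: a kernel (here called vecAdd) that each GPU thread runs on one array element, launched from host code that copies data to and from the device. This is only an illustrative sketch, not an assignment or a template for course work.

  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>

  /* Kernel: each GPU thread adds one element of a and b. */
  __global__ void vecAdd(const float *a, const float *b, float *c, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          c[i] = a[i] + b[i];
  }

  int main(void)
  {
      const int N = 1 << 20;
      size_t bytes = N * sizeof(float);

      /* Allocate and initialize host arrays. */
      float *h_a = (float *)malloc(bytes);
      float *h_b = (float *)malloc(bytes);
      float *h_c = (float *)malloc(bytes);
      for (int i = 0; i < N; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

      /* Allocate device arrays and copy inputs to the GPU. */
      float *d_a, *d_b, *d_c;
      cudaMalloc((void **)&d_a, bytes);
      cudaMalloc((void **)&d_b, bytes);
      cudaMalloc((void **)&d_c, bytes);
      cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

      /* Launch enough 256-thread blocks to cover all N elements. */
      int threads = 256;
      int blocks = (N + threads - 1) / threads;
      vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, N);

      /* Copy the result back and check one element (expect 3.0). */
      cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
      printf("c[0] = %f\n", h_c[0]);

      cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
      free(h_a); free(h_b); free(h_c);
      return 0;
  }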

Goals

Students who take this course will learn to parallelize programs and run them efficiently on GPUs, as described above. Mastery of these goals must be demonstrated by building a project.

Pre-requisites

ECE 200, ECE 216, ECE 201/401, or equivalent. Familiarity with assembly language and the C programming language. Instructor approval.

Grading

There will be no mid-term or final exams for this course.

There will be 4 programming assignments (65% of the grade) and 1 project (35% of the grade).

Students will be expected to present their project to the rest of the class and also submit a project report.

Late Submissions

Submissions within 1 day of the due date will be penalized 10% of the grade.

Submissions more than 1 day late will not be graded, except at the instructor's discretion.

Required and Recommended Materials

There are no required textbooks for this class.

The following resources are useful general references:

  1. NVIDIA CUDA C Programming Guide (8.0)
  2. CUDA C Best Practices Guide

Lecture-wise resources are given below in the schedule.

Schedule (Tentative and subject to change)

Date         | Topic                                        | Assignments
September 4  | Holiday (Labour Day)                         |
September 6  | Introduction                                 |
September 11 | Lab (Introduction to pthreads)               |
September 13 | Synchronization (Guest lecture: Prof. Scott) |
September 18 | Lab                                          |
September 20 | Parallelizing Programs                       |
September 25 | Lab                                          |
September 26 |                                              | A1 (released September 27)
September 27 | Understanding Memory Performance             |
October 2    | Lab                                          |
October 4    | Optimizing for Caches                        |
October 8    |                                              | A1 due 7PM (extended from October 6)
October 9    | Holiday (Fall Term Break)                    |
October 11   | Introduction to GPUs                         | Project proposal discussion period starts
October 16   | Lab                                          | A2
October 18   | CUDA Programming                             |
October 23   | Lab                                          |
October 25   | GPU Architecture I (Execution)               |
October 27   |                                              | A2 due
October 30   | Lab                                          | A3 (released November 2)
November 1   | GPU Architecture II (Memory)                 | Project proposal due (extended to November 7)
November 6   | Lab                                          |
November 7   |                                              | Project proposal due
November 8   | Synchronization and Communication            |
November 12  |                                              | A3 due, A4
November 13  | Lab                                          |
November 15  | Heterogeneous Parallelism                    | Project report draft #1 due
November 20  | Lab                                          |
November 21  |                                              | A4 due
November 22  | Holiday (Thanksgiving Break)                 | A3 due (extension for BlueHive outage)
November 27  | Lab                                          |
November 29  | High-Level GPU Programming                   |
December 1   |                                              | Project report draft #2 due, A4
December 4   | Lab                                          |
December 6   | Wrap-up and Project reviews                  |
December 11  | Project Presentations                        |
December 13  |                                              | A4 due

Last updated: 1 Dec 2017