The Memory Bandwidth Bottleneck and its Amelioration by a Compiler
  
  
  Chen Ding and Ken Kennedy
  
  
                            ABSTRACT 
  
As the speed gap between CPU and memory widens, the memory hierarchy
has become the primary factor limiting program performance.  Until now,
the principal focus of hardware and software innovations has been
overcoming latency.  However, the advent of latency-tolerance
techniques such as non-blocking caches and software prefetching begins
  the process of trading bandwidth for latency by overlapping and
  pipelining memory transfers.  Since actual latency is the inverse of
  the consumed bandwidth, memory latency cannot be fully tolerated
  without infinite bandwidth.  This perspective has led us to two
  questions.  Do current machines provide sufficient data bandwidth?  If
  not, can a program be restructured to consume less bandwidth?  This
  paper answers these questions in two parts.  The first part defines a
  new bandwidth-based performance model and demonstrates the
  serious performance bottleneck due to the lack of memory bandwidth.
  The second part describes a new set of compiler optimizations for
  reducing the bandwidth consumption of programs.  These techniques are
  bandwidth-minimal loop fusion, storage reduction by array peeling
  and shrinking, and store elimination.  
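  
  To make the listed techniques concrete, the following is a minimal C
  sketch (illustrative only, not taken from the paper; the array names,
  loop bound, and function names are hypothetical) of how fusing two
  loops can let a temporary array shrink to a scalar and eliminate its
  stores, reducing the memory traffic of the loop pair.
  
    #define N 1000000
  
    /* Before: the temporary array t is stored to memory by the first
       loop and reloaded by the second, adding two extra passes of
       memory traffic. */
    void before(const double *a, double *b, double *t)
    {
        for (int i = 0; i < N; i++)
            t[i] = 2.0 * a[i];      /* store of t consumes bandwidth  */
        for (int i = 0; i < N; i++)
            b[i] = t[i] + 1.0;      /* reload of t consumes bandwidth */
    }
  
    /* After: fusing the loops lets the temporary live in a register
       (storage reduction) and removes the store and reload of t
       entirely (store elimination); only a is read and b is written. */
    void after(const double *a, double *b)
    {
        for (int i = 0; i < N; i++) {
            double t = 2.0 * a[i];  /* temporary shrunk to a scalar   */
            b[i] = t + 1.0;
        }
    }
  
  In this informal sense the fused loop is bandwidth-minimal: each
  element of a and b crosses the memory bus exactly once.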
  
  