Improving Cache Performance in Dynamic Applications through
Data and Computation Reorganization at Run Time
Chen Ding and Ken Kennedy
ABSTRACT
With the rapid improvement of processor speed, performance of the
memory hierarchy has become the principal bottleneck for most
applications. A number of compiler transformations have been developed
to improve data reuse in cache and registers, thus reducing the total
number of direct memory accesses in a program. Until now, however,
most data reuse transformations have been static---applied only
at compile time. As a result, these transformations cannot be used to
optimize irregular and dynamic applications, in which the data layout
and data access patterns remain unknown until run time and may even
change during the computation.
In this paper, we explore ways to achieve better data reuse in
irregular and dynamic applications by building on the
inspector-executor method used by Saltz for run-time
parallelization. In particular, we present and evaluate a
dynamic approach for improving both computation and data locality in
irregular programs. Our results demonstrate that run-time program
transformations can substantially improve computation and data
locality and that, despite the complexity and cost involved, a compiler
can automate such transformations, eliminating much of the associated
run-time overhead.
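As a rough illustration of the kind of run-time reorganization discussed above, the sketch below shows a first-touch data packing step in an inspector-executor style for a small irregular edge loop. The kernel, the array names (edge, x, force), and the packing heuristic are illustrative assumptions for this sketch, not code taken from the paper.

    /* Illustrative sketch only: first-touch data packing at run time.
     * The inspector scans the index array, assigns each node a new
     * position in the order it is first touched, remaps the data, and
     * rewrites the indices; the executor then runs the original loop
     * over the reordered data. */
    #include <stdio.h>

    #define N_NODES 8
    #define N_EDGES 6

    int main(void) {
        /* Irregular access pattern: each edge touches two nodes. */
        int edge[N_EDGES][2] = { {5,2}, {2,7}, {0,5}, {7,3}, {3,0}, {6,1} };
        double x[N_NODES] = {0, 1, 2, 3, 4, 5, 6, 7};

        /* Inspector: number nodes in first-touch order so that nodes
         * used together end up adjacent in memory. */
        int new_pos[N_NODES];
        for (int i = 0; i < N_NODES; i++) new_pos[i] = -1;
        int next = 0;
        for (int e = 0; e < N_EDGES; e++)
            for (int k = 0; k < 2; k++)
                if (new_pos[edge[e][k]] == -1)
                    new_pos[edge[e][k]] = next++;
        for (int i = 0; i < N_NODES; i++)   /* untouched nodes keep a slot */
            if (new_pos[i] == -1) new_pos[i] = next++;

        /* Remap the data array and rewrite the index array to match. */
        double x_new[N_NODES];
        for (int i = 0; i < N_NODES; i++) x_new[new_pos[i]] = x[i];
        for (int e = 0; e < N_EDGES; e++)
            for (int k = 0; k < 2; k++)
                edge[e][k] = new_pos[edge[e][k]];

        /* Executor: the original irregular loop, now with better
         * spatial locality in x_new and force. */
        double force[N_NODES] = {0};
        for (int e = 0; e < N_EDGES; e++) {
            double d = x_new[edge[e][0]] - x_new[edge[e][1]];
            force[edge[e][0]] += d;
            force[edge[e][1]] -= d;
        }

        for (int i = 0; i < N_NODES; i++)
            printf("force[%d] = %.1f\n", i, force[i]);
        return 0;
    }

A computation-reordering pass could be sketched in the same style, for example by sorting the edge loop's iterations by the new positions of the nodes they touch; the specific heuristics evaluated in the paper are described in its body rather than here.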