Michael L. Scott (speaker)
Wei Li
Sandhya Dwarkadas
Leonidas Kontothanassis*
Galen Hunt
Maged Michael
Robert Stets
Nikolaos Hardavellas
Wagner Meira
Alexandros Poulos
Michal Cierniak
Srinivasan Parthasarathy
Mohammed Zaki
University of Rochester
Computer Science Department
Rochester, NY 14627-0226
scott@cs.rochester.edu
* Digital Equipment Corporation
Cambridge Research Laboratory
The Cashmere project attempts to capture the ``knee of the curve'' in price-performance for shared-memory parallel computing: it exploits recent advances in local-area networks that provide low-latency, user-level access to remote memory in hardware, but implements coherence in software. The project has recently moved from simulation to implementation, using a 32-processor AlphaServer cluster (eight 4-processor nodes) connected by DEC's Memory Channel network. This talk will focus on the Cashmere implementation and on early experience as a Memory Channel field test site.
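To make the software-coherence idea concrete, the sketch below shows page-fault-driven coherence of the generic kind used by page-based software DSM systems; it is an illustration only, not the Cashmere protocol itself, and the fetch_page_from_home() helper is hypothetical.

/*
 * Minimal, generic sketch of page-granularity software coherence:
 * shared pages are kept inaccessible (or read-only), and the SIGSEGV
 * handler brings a page up to date and upgrades its protection on demand.
 * Not the Cashmere protocol; fetch_page_from_home() is hypothetical.
 */
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;

/* Hypothetical helper: copy the current version of the page from its home node. */
extern void fetch_page_from_home(void *page_addr);

static void fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    uintptr_t addr = (uintptr_t)info->si_addr;
    void *page = (void *)(addr & ~((uintptr_t)page_size - 1));

    fetch_page_from_home(page);                        /* bring the page up to date  */
    mprotect(page, page_size, PROT_READ | PROT_WRITE); /* then allow further access  */
}

void install_coherence_handler(void)
{
    struct sigaction sa;

    page_size = sysconf(_SC_PAGESIZE);
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
}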
The Cashmere coherence protocol is characterized by:
Simulation studies, reported at HPCA-1, Supercomputing '95, and HPCA-2, indicate that on an idealized remote-access network the Cashmere protocol can achieve dramatic performance improvements over twin-and-diff-based distributed shared memory, and can in fact approach the performance of full hardware coherence. Two practical issues limit the extent to which we can duplicate these results in our prototype implementation. First, cross-sectional bandwidth in the first-generation Memory Channel, while impressive at approximately 100 MB/s, falls far short of what tightly-coupled multiprocessors such as the T3E or Paragon can achieve. Second, OS overheads, which could in theory be relatively modest, are substantial in OSF/1. In particular, the need to create a separate ``memory object'' for every remote-mapped page places stresses on the VM system that its designers did not anticipate.
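As an illustration of the memory-object issue (using generic POSIX shared-memory calls rather than the actual OSF/1 or Memory Channel interface), the sketch below maps a shared region one page at a time, creating a distinct memory object per page; the object names and region size are invented for the example, and error checks are omitted.

/*
 * One memory object per remote-mapped page: each page gets its own named
 * object and its own mapping, so a large shared region implies thousands
 * of kernel VM objects. Illustrative only; names and sizes are assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NUM_PAGES 2048   /* e.g., a 16 MB region of 8 KB (Alpha) pages */

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    void **maps = malloc(NUM_PAGES * sizeof *maps);

    for (int i = 0; i < NUM_PAGES; i++) {
        char name[64];
        snprintf(name, sizeof name, "/dsm_page_%d", i);

        /* A distinct object -- and hence distinct kernel VM state -- per page. */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        ftruncate(fd, page);
        maps[i] = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
    }
    /* ... use the region, then munmap() and shm_unlink() each object ... */
    return 0;
}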
Work on Cashmere is proceeding on several fronts: