The CASHMERe Project
CASHMERe stands for "Coherence Algorithms for SHared MEmory aRchitectures" and is an ongoing effort to provide efficient, scalable shared memory with minimal hardware support. It is well accepted today that commercial workstations offer the best price/performance ratio and that shared memory provides the most desirable programming paradigm for parallel computing. Unfortunately, shared memory emulations on networks of workstations provide acceptable performance for only a limited class of applications. CASHMERe attempts to bridge the performance gap between shared memory emulations on networks of workstations and tightly-coupled cache-coherent multiprocessors, while using minimal hardware support.
In the context of CASHMERe, early simulation results indicate that NCC-NUMA (Non-Cache-Coherent, Non-Uniform Memory Access) machines can greatly improve the performance of software distributed shared memory (DSM) systems and approach that of fully hardware-coherent multiprocessors. The basic property of NCC-NUMA systems is the ability to access remote memory directly; such a capability is offered by a variety of network interfaces, including DEC's Memory Channel, HP's Hamlyn, the VIA standard, the Scheduled Transfer standard, and the Princeton Shrimp. Given current technology, the additional hardware cost of NCC-NUMA systems over pure message-passing systems is minimal. Based on this fact and our performance results, we believe that NCC-NUMA machines lie near the knee of the price/performance curve.
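To make the idea concrete, the C sketch below shows how a program might touch remote memory on an NCC-NUMA machine once a window of that memory has been mapped into the local address space: each access is an ordinary load or store, with no system call or interrupt on the critical path. The device path /dev/rmem and the fixed mapping offset are hypothetical stand-ins; real interfaces such as the Memory Channel have their own setup libraries and typically distinguish transmit (write) from receive (read) mappings.

    /* Minimal sketch of direct remote memory access, assuming a
     * hypothetical character device /dev/rmem that exposes a window of
     * another node's memory.  After the mmap, remote reads and writes
     * are single loads and stores.                                     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_BYTES 4096

    int main(void)
    {
        int fd = open("/dev/rmem", O_RDWR);      /* hypothetical device */
        if (fd < 0) { perror("open"); return 1; }

        /* Map the remote window; afterwards it is just memory. */
        volatile uint64_t *remote =
            mmap(NULL, REGION_BYTES, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0);
        if (remote == MAP_FAILED) { perror("mmap"); return 1; }

        remote[0] = 42;                  /* remote write: one store */
        uint64_t v = remote[1];          /* remote read:  one load  */
        printf("read %llu from the remote window\n",
               (unsigned long long)v);

        munmap((void *)remote, REGION_BYTES);
        close(fd);
        return 0;
    }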
The Department of Computer Science at the University of Rochester has built a 32-processor Cashmere prototype. A significant part of the funding comes in the form of an equipment grant from Digital Equipment Corporation. The prototype consists of eight 4-processor DEC 4100 4/600 multiprocessors with 2 GB of memory each, connected by a Memory Channel network. The Memory Channel plugs into any PCI bus. It provides a memory-mapped network interface with which processors can read and write remote locations without kernel intervention or inter-processor interrupts. End-to-end bandwidth is currently about 70 MB/sec; remote write latency is about 3.5 µs; cross-sectional bandwidth is approximately 500 MB/sec. Cashmere augments the functionality of the Memory Channel by providing cache coherence in software.
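"Cache coherence in software" here refers to VM-based techniques in which ordinary page protections are used to detect accesses to shared data. The sketch below illustrates only the generic write-detection mechanism that such systems build on, not the actual Cashmere protocol (which adds directories, twins and diffs, and release-consistent propagation over the Memory Channel); all names in it are illustrative.

    /* Minimal sketch of VM-based write detection: a shared page is
     * mapped read-only, the first store to it faults, and the handler
     * records the page as dirty before re-enabling writes.             */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static char  *shared;                 /* one "shared" page          */
    static size_t page_size;
    static volatile sig_atomic_t dirty;   /* set on first write         */

    static void fault_handler(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        char *addr = (char *)info->si_addr;
        if (addr < shared || addr >= shared + page_size)
            _exit(1);                     /* an unrelated fault          */
        dirty = 1;                        /* a real protocol would make a
                                             twin here for later diffing */
        /* Unprotect; returning lets the faulting store retry and succeed. */
        mprotect(shared, page_size, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        page_size = (size_t)sysconf(_SC_PAGESIZE);
        shared = mmap(NULL, page_size, PROT_READ,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = fault_handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        shared[0] = 'x';                  /* traps; handler marks dirty  */
        printf("page dirty after first write: %d\n", (int)dirty);

        /* At a release operation, dirty pages would be diffed against
         * their twins and the changes written to other nodes' copies.  */
        munmap(shared, page_size);
        return 0;
    }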
As Cashmere has matured, more and more effort is being directed toward
the follow-on InterAct project.
The people behind CASHMERe are:
- Faculty
- Alumni
Other Presentations
- "Programming Models for Parallel and Distributed Systems," by M. L. Scott. Panel session at the 10th Intl. Conf. on Architectural Support for Programming Languages and Operating Systems, San Jose, CA, Oct. 2002.
- Main presentation and panel remarks from the Workshop on Communication and Middleware for Parallel Programming Models, held in conjunction with IPDPS 2002.
- "Is S-DSM Dead?," by M. L. Scott. Invited keynote address, 2nd Workshop on Software Distributed Shared Memory, Santa Fe, NM, May 2000.
- "Shared State in Distributed Systems," by M. L. Scott. Position paper from the NSF Workshop on New Challenges and Directions for Systems Research, St. Louis, MO, July 1997. Also available: group report on Wide-Area Network Resource Management.
Annual reports to the NSF Experimental Software Systems Program, Grant CCR-9705594, June 1, 1997 to May 31, 2001.

Publications
- L. Kontothanassis, R. Stets, G. Hunt, U. Rencuzogullari, G. Altekar, S. Dwarkadas, and M. L. Scott. "Shared Memory Computing on Clusters with Symmetric Multiprocessors and System Area Networks." ACM Transactions on Computer Systems, Aug. 2005. PDF.
- R. Stets, D. Chen, S. Dwarkadas, N. Hardavellas, G. C. Hunt, L. Kontothanassis, G. Magklis, S. Parthasarathy, U. Rencuzogullari, and M. L. Scott. "The Implementation of Cashmere." TR 723, Computer Science Department, University of Rochester, Dec. 1999. Compressed PostScript.
- R. Stets, S. Dwarkadas, L. Kontothanassis, U. Rencuzogullari, and M. L. Scott. "The Effect of Network Total Order and Remote-Write Capability on Network-based Shared Memory Computing." In Proc., HPCA '00, Toulouse, France, January 2000. Compressed PostScript. Presentation slides: PowerPoint; PDF. Earlier version available as TR 711, Computer Science Department, University of Rochester, February 1999. Compressed PostScript.
- S. Dwarkadas, N. Hardavellas, L. Kontothanassis, R. Nikhil, and R. Stets. "Cashmere-VLM: Remote Memory Paging for Software Distributed Shared Memory." In Proc., IPPS '99, San Juan, Puerto Rico, April 1999. PDF.
- S. Dwarkadas, K. Gharachorloo, L. Kontothanassis, D. Scales, M. L. Scott, and R. Stets. "Comparative Evaluation of Fine- and Coarse-Grain Software Distributed Shared Memory." In Proc., HPCA '99, Orlando, FL, February 1999.
- N. Hardavellas, L. Kontothanassis, R. Nikhil, and R. Stets. "Software Cache Coherence with Memory Scaling." In the Seventh Workshop on Scalable Shared Memory Multiprocessors, held in conjunction with ISCA '98.
- S. Dwarkadas, K. Gharachorloo, L. Kontothanassis, D. Scales, M. L. Scott, and R. Stets. "Comparative Evaluation of Fine- and Coarse-Grain Approaches for Software-based Distributed Shared Memory." In the Seventh Workshop on Scalable Shared Memory Multiprocessors, held in conjunction with ISCA '98.
- R. Stets, S. Dwarkadas, N. Hardavellas, G. Hunt, L. Kontothanassis, S. Parthasarathy, and M. L. Scott. "Cashmere-2L: Software Coherent Shared Memory on a Clustered Remote-Write Network." In Proc., SOSP '97, Saint-Malo, France, October 1997. Compressed PostScript; HTML. Conference slides (PDF).
- L. Kontothanassis, G. Hunt, R. Stets, N. Hardavellas, M. Cierniak, S. Parthasarathy, W. Meira, S. Dwarkadas, and M. L. Scott. "VM-Based Shared Memory on Low-Latency, Remote-Memory-Access Networks." In Proc., ISCA '97, Denver, CO, June 1997. Also TR 643, Computer Science Department, University of Rochester, November 1996. Compressed PostScript. Conference slides (PDF).
- M. L. Scott, W. Li, L. Kontothanassis, G. Hunt, M. Michael, R. Stets, N. Hardavellas, W. Meira, A. Poulos, M. Cierniak, S. Parthasarathy, and M. Zaki. "Implementation of Cashmere." Workshop on Scalable Shared Memory Multiprocessors, Boston, MA, October 1996. Presentation slides (HTML).
- G. C. Hunt and M. L. Scott. "Using Peer Support to Reduce Fault-Tolerant Overhead in Distributed Shared Memories." TR 626, Computer Science Department, University of Rochester, June 1996. Compressed PostScript.
- L. I. Kontothanassis and M. L. Scott. "Efficient Shared Memory with Minimal Hardware Support." In Computer Architecture News, September 1995. Compressed PostScript.
- L. I. Kontothanassis and M. L. Scott. "Using Memory-Mapped Network Interfaces to Improve the Performance of Distributed Shared Memory." In Proc., 2nd HPCA, San Jose, CA, February 1996. Compressed PostScript.
- L. I. Kontothanassis, M. L. Scott, and R. Bianchini. "Lazy Release Consistency for Hardware-Coherent Multiprocessors." In Proc., Supercomputing '95, San Diego, CA, December 1995. HTML.
- L. I. Kontothanassis and M. L. Scott. "Software Cache Coherence for Current and Future Architectures." In the special JPDC issue on Scalable Shared Memory, November 1995, vol. 29, no. 2, pp. 179-195. Compressed PostScript.
- L. I. Kontothanassis and M. L. Scott. "Software Cache Coherence for Large Scale Multiprocessors." In Proc., 1st HPCA, Raleigh, NC, January 1995. Compressed PostScript.
- M. Marchetti, L. I. Kontothanassis, R. Bianchini, and M. L. Scott. "Using Simple Page Placement Policies to Reduce the Cost of Cache Fills in Coherent Shared-Memory Systems." In Proc., IPPS '95, Santa Barbara, CA, April 1995. Compressed PostScript.
- "Apparatus and Method for Maintaining Data Coherence Within a Cluster of Symmetric Multiprocessors," by L. Kontothanassis, M. L. Scott, R. Stets, S. Dwarkadas, N. Hardavellas, and G. Hunt. US Patent #6,341,339. Submitted 26 March 1998; awarded 22 January 2002.
Last Change: 13 October 2006 / scott@cs.rochester.edu