IMPLEMENTATION OF CASHMERE
Michael L. Scott
Computer Science Department
University of Rochester
Workshop on Scalable Shared-Memory Multiprocessors
October 1996
Outline
- Motivation
- Key Ideas
- Memory Channel Overview
- Implementation Overview
- Current Work
- Future Plans
Cashmere People
Motivation
- Shared-memory programming model
- Wide hardware spectrum
  - DSM/SVM
  - full hardware coherence
  - options in between
Hardware is faster, but software
- is cheaper
- can be built more quickly (sooner to market, so it runs on newer, faster processors)
- can use more complex protocols
- is easier to tune/fix/enhance
- is easier to customize
The Price/Performance Curve
Q: How should coherence work given very low latency user-level messages?
Key Ideas in Cashmere
- multi-writer release-consistent
- write-through to home copy
  - no twins and diffs (cf. AURC)
- dynamic choice of home node (first touch after initialization)
- directories
  - no intervals and timestamps
- Each node has:
  - sharing set for pages for which it is home
  - write notices for remote pages that are currently mapped
- Wait for write-through at release
  - send write notices
- Invalidate as necessary at acquire
- Re-map on page fault (release/acquire actions sketched below)
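The release/acquire actions above can be summarized in a minimal C sketch. The 32-node bitmask, the field names, and the stubbed helpers (wait_for_write_through, send_write_notice, invalidate_page) are illustrative assumptions standing in for the real Cashmere and Memory Channel operations, not the actual interfaces.

/* Per-page directory entry and the release/acquire actions above.
 * Sizes, names, and the stubbed helpers are hypothetical stand-ins. */
#include <stdint.h>
#include <stdio.h>

#define MAX_NODES 32

typedef struct {
    uint32_t home_node;    /* chosen at first touch after initialization */
    uint32_t sharing_set;  /* bitmask of nodes that have the page mapped */
} dir_entry_t;

typedef struct { int page; int writer; } write_notice_t;

/* Hypothetical stubs for the network and VM operations. */
static void wait_for_write_through(void)          { /* fence on outstanding MC writes */ }
static void send_write_notice(int node, int page) { printf("notice: page %d -> node %d\n", page, node); }
static void invalidate_page(int page)             { printf("unmap page %d\n", page); }

/* Release: wait for write-through to the home copies to complete, then
 * post write notices to every other node in each dirty page's sharing set. */
void release(dir_entry_t *dir, const int *dirty, int n_dirty, int me)
{
    wait_for_write_through();
    for (int i = 0; i < n_dirty; i++) {
        uint32_t sharers = dir[dirty[i]].sharing_set & ~(1u << me);
        for (int node = 0; node < MAX_NODES; node++)
            if (sharers & (1u << node))
                send_write_notice(node, dirty[i]);
    }
}

/* Acquire: invalidate (unmap) every page named by a pending write notice;
 * the page is re-mapped, and re-fetched from its home, at the next fault. */
void acquire(const write_notice_t *notices, int n_notices)
{
    for (int i = 0; i < n_notices; i++)
        invalidate_page(notices[i].page);
}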
Works very well in simulation.
BUT: this was simulation, on a network that doesn't match any current commercial hardware.
--> Implementation based on Digital's Memory Channel for PCI
- 4x8 processor testbed
- remote-write API (no remote reads; usage sketched below)
- I/O space (uncached)
- no inter-node coherence, but NI coherent with local processors
- global address space; both ends mapped
- VM protection
- 4 us user-user latency
- ~30 MB/sec per-link bandwidth (~130 MB/sec aggregate)
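A rough illustration of this remote-write-only model: mc_map_tx and mc_map_rx are hypothetical stand-ins for the Digital UNIX Memory Channel mapping calls (here backed by an in-process stub region), and the code shows only the shape of sender-side remote writes and receiver-side local polling.

/* Remote-write-only model: a region is mapped for transmit on the sender
 * and for receive elsewhere; stores through the transmit mapping (uncached
 * I/O space) go out on the network and land in the receivers' ordinary
 * memory, which the NI keeps coherent with the local processors.
 * mc_map_tx/mc_map_rx and the stub region are hypothetical stand-ins. */
#include <stdint.h>
#include <stddef.h>

enum { FLAG_REGION = 7, REGION_BYTES = 4096 };

static uint32_t stub_region[REGION_BYTES / sizeof(uint32_t)];   /* stand-in */
static volatile uint32_t *mc_map_tx(int r, size_t n) { (void)r; (void)n; return stub_region; }
static volatile uint32_t *mc_map_rx(int r, size_t n) { (void)r; (void)n; return stub_region; }

/* Sender: a store through the transmit mapping is a remote write only;
 * the transmit side cannot be read back.                                 */
void post_flag(uint32_t value)
{
    volatile uint32_t *tx = mc_map_tx(FLAG_REGION, REGION_BYTES);
    tx[0] = value;
}

/* Receiver: poll the receive mapping, which is plain local memory, so
 * spinning generates no network traffic.                                 */
uint32_t wait_flag(void)
{
    volatile uint32_t *rx = mc_map_rx(FLAG_REGION, REGION_BYTES);
    while (rx[0] == 0)
        ;
    return rx[0];
}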
The Good News
- reasonable per-link bandwidth
- very low latency
- low cost (for the network anyway :-)
- hardware quite reliable
- multicast capability
The Bad News
- no remote reads
- low aggregate bandwidth (at present)
- high page fault and signal overhead in OSF Unix
- some resource recovery problems in early kernel software
Cashmere MC Highlights
As in simulation, except:
- copy to local on page fault (HPCA 96)
- receive mapping on home node; transmit mapping elsewhere
- "doubled writes" in software (stack and global refs are not doubled; sketch below)
- replicated directory
  - read locally
  - no lock needed -- entries are small
  - broadcast updates (HW support)
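A sketch of what one instrumented store looks like under the doubled-write scheme: the store goes to the local (cached) copy and is also written through the Memory Channel transmit mapping of the page's home copy, giving write-through without twins or diffs. The 8 KB page size, the shared_base/tx_base layout, and the names are assumptions for illustration only.

/* Software "doubled write": a store to a shared page whose home is remote
 * is issued twice, once to the local cached copy and once through the MC
 * transmit mapping of the home copy.  Layout and names are hypothetical. */
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 8192u                      /* assumed Alpha page size */

extern uint8_t          *shared_base;        /* local copies of shared pages  */
extern volatile uint8_t *tx_base[];          /* per-page MC transmit mappings */

static inline void doubled_write32(uint32_t *local_addr, uint32_t value)
{
    size_t offset = (size_t)((uint8_t *)local_addr - shared_base);
    size_t page   = offset / PAGE_SIZE;

    *local_addr = value;                                       /* local copy    */
    *(volatile uint32_t *)(tx_base[page] + offset % PAGE_SIZE) = value;
                                                               /* write-through */
}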
How Should We Read Pages?
- interrupts (ouch)
- vs. protocol processor (current)
- vs. SW polling (future?) (cf. Shasta)
A protocol processor can also (dispatch loop sketched below):
- batch-execute directory modifications (reduction of the sharing set) on behalf of a remote acquirer
- do most of the work at a local release
  - propagation of write notices
  - updates of directory entries
  - compute processor needn't stall
- track access patterns and prefetch?
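A sketch of the polling dispatch loop a dedicated protocol processor (or a compute processor polling in software) might run: because the network offers no remote reads, a faulting node writes a request into the home node's queue over the Memory Channel, and the home node pushes the page back with remote writes. The queue layout and the stubbed helpers are illustrative assumptions, not the actual Cashmere code.

/* Request queue filled by remote Memory Channel writes and drained by a
 * local polling loop.  Layout and helpers are hypothetical stand-ins.   */
#include <stdint.h>

#define QUEUE_SLOTS 256u

typedef struct {
    volatile uint32_t head;                      /* advanced by remote writes */
    uint32_t          tail;                      /* local service cursor      */
    struct { uint32_t page, requester; } slot[QUEUE_SLOTS];
} request_queue_t;

/* Hypothetical stubs: push a page to the requester with MC remote writes,
 * then update the sharing set in the local directory entry. */
static void send_page(uint32_t page, uint32_t node)        { (void)page; (void)node; }
static void update_directory(uint32_t page, uint32_t node) { (void)page; (void)node; }

/* Poll for requests; no interrupts, and the compute processors needn't stall. */
void protocol_loop(request_queue_t *q)
{
    for (;;) {
        while (q->tail == q->head)
            ;                                    /* spin on local memory */
        uint32_t i = q->tail++ % QUEUE_SLOTS;
        send_page(q->slot[i].page, q->slot[i].requester);
        update_directory(q->slot[i].page, q->slot[i].requester);
    }
}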
Current Priorities
- performance tuning (and bug fixes!)
- comparison of
  - TreadMarks (intervals and diffs)
  - Cashmere (directories and write-through)
Other options:
- AURC (intervals and write-through)
- directories and diffs
- multi-level coherence protocol
  - exploit HW coherence within each node
  - problem: how to reconcile remote changes with a copy in use by another local processor
  - message-based coherence on the larger network
Other Future Work
- compiler integration
- prescriptive (e.g. to aggregate messages)
- descriptive (hints)
- fault tolerance (good results for TreadMarks)
- heterogeneity
- threads
- network interface design
- OS issues (scheduling, placement, naming, etc.)