CS 200 Project III
Software Adaptive Cache (SAC)
This is an open-ended research project. The basic question is whether
program-specific, software-managed cache memory can be more effective than
current hardware caching schemes. You need to form a group
of two to four people and complete the project in two phases.
Unlike previous projects, you are free to choose your group members, you
need to use C/C++ to write programs, you will have all test data from the
beginning, and you cannot use any other groups' code.
Updates
- Changes have been made to Cache.h to add a new member function, Cache::GetSize(), which returns the size of the cache. (4/12/02)
- Group size can be two to four. The requirement is the same for all groups. However, if a two-person group accomplishes as much work as a four-person group, the former will receive bonus points. (4/13/02)
- A new member function, Cache::CacheRead(unsigned), has been added to the Cache interface; it reads the content of a cache element without modifying its info field. Cache elements are now initialized to MAXINT-1. (4/16/02)
- Group list is here. (4/17/02)
- Turn-in requirement for Phase I (4/17/02)
  - A file titled "REPORT" that contains
    - the minimal miss rate from Belady and the execution time of your implementation
    - the miss rate and average search time of LRU replacement
    - The file can be in text, postscript, or pdf format with the appropriate file-name extension.
  - Program files with a user manual, as in previous projects. The main purpose is verification: the TAs need to be able to reproduce your results and also ensure that there is no illegal copying of other people's programs.
- Specific requirements and an evaluation function for CacheAccess have been added. (4/23/02)
- System setup
- Phase I
- Phase II
- Trace files
System Setup
A data-access trace is a sequence of integers. An integer i means an access to the i'th data element in a program. It is possible that a program does not access all of its elements. The cache consists of an array of cache elements, each of which holds one data element.
An instance of the adaptive cache consists of two parts: a generic cache module (Cache) and a user-defined function (CacheAccess). The generic module imports CacheAccess to manage placement, search, and replacement of data elements in the cache. CacheAccess performs this management using the interface functions of the generic cache module. The interface functions of Cache include cache read, write, exchange, and report. CacheAccess cannot change the internal state of Cache without going through these interface functions. The Cache interface is defined as follows.
Each cache element stores two attributes in class CacheElem.

class CacheElem {
public:
    // Index of the data element currently stored; initialized to MAXINT-1 (see the 4/16/02 update).
    unsigned addr;
    // Optional storage for any user-supplied information, e.g. the last access time of the element.
    unsigned info;
};
Cache::DataAccess is called for each element in a trace. It invokes the user-defined CacheAccess supplied in the constructor. CacheAccess must either find the accessed element in the cache or load in the accessed element (through CacheWrite). Either way, it must return the index of the cache element in which the accessed element is currently stored. The implementation of Cache::DataAccess is as follows. Note that you cannot change the implementation of any member function of the Cache class, including DataAccess.
void Cache::DataAccess(unsigned addr) {
    unsigned cache_index = CacheAccess(this, addr);
    assert(cache[cache_index].addr == addr);
    cur_time++;
}
Given a trace, you can measure the performance of your adaptive cache in three steps.
- Open the trace file. Construct a Cache object by passing in its size and the function CacheAccess.
- For each access i in the trace, call Cache::DataAccess(i).
- At the end of the trace, call Cache::Report().
Your CacheAccess function must manage cache placement, search, and replacement completely. The performance of the cache is determined by how well it is managed by your function. The following example is a function that implements a direct-mapped cache with 16 cache elements; a sketch of an example driver program that uses the function follows its definition.
unsigned CacheAccess(Cache *cache, unsigned addr) {
    unsigned index = addr & 0xf;                  // the low 4 bits select one of the 16 slots
    CacheElem elem = cache->CacheRead(index, 0);
    if (elem.addr != addr)                        // miss: load the accessed element into its slot
        cache->CacheWrite(index, addr, 0);
    return index;
}
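As a concrete illustration, here is a minimal sketch of such a driver program, following the three steps above. It assumes the trace is a whitespace-separated text file of integers and that the Cache constructor takes the cache size and the CacheAccess function, as described above; adjust the details to match the actual Cache.h.

#include <cstdio>
#include <cassert>
#include "Cache.h"

unsigned CacheAccess(Cache *cache, unsigned addr);   // the direct-mapped function above

int main(int argc, char *argv[]) {
    assert(argc == 2);                        // usage: sim <trace-file>

    // Step 1: open the trace file and construct the cache.
    FILE *trace = fopen(argv[1], "r");
    assert(trace != NULL);
    Cache cache(16, CacheAccess);             // 16 elements, managed by CacheAccess

    // Step 2: feed every access in the trace to the cache.
    unsigned addr;
    while (fscanf(trace, "%u", &addr) == 1)
        cache.DataAccess(addr);

    // Step 3: report the miss rate and the average search time.
    cache.Report();
    fclose(trace);
    return 0;
}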
Phase I
Phase I has two parts. The second part is trivial, but the first part is quite the opposite: you need an efficient design to handle large traces.
- Design an algorithm, and implement it, to measure the minimal number of misses needed for each trace using three cache sizes: the largest powers of two that are no greater than 1/10, 1/4, and 1/2 of the data size (take the integer part if the fraction of the data size is not an integer). For this part only, you do not need to use the provided Cache module. Submit the number of misses for each cache size and each trace. A sketch of one possible approach appears after this list.
- Using the standard Cache module, write a CacheAccess function that implements a 2-way set associative cache with an LRU (least recently used) replacement policy. The cache sizes are the largest powers of two that are no greater than 1/10, 1/4, and 1/2 of the data size. Submit the miss rate AND the average search time as reported by the Cache module. A sketch of such a function is also given after this list.
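For the first part, the minimal number of misses corresponds to Belady's optimal (MIN) replacement: on a miss with a full cache, evict the resident element whose next access lies farthest in the future. Below is a minimal sketch of one way to compute this count efficiently for large traces, by precomputing next-use positions and keeping the resident elements ordered by next use. It is standalone and does not use the Cache module, as permitted for this part; the trace is assumed to have been read into a vector already.

#include <vector>
#include <set>
#include <unordered_map>
#include <utility>
#include <functional>

// Count the minimal number of misses for a cache of cache_size elements under
// Belady's optimal (farthest-next-use) replacement.
unsigned long BeladyMisses(const std::vector<unsigned> &trace, unsigned cache_size) {
    const unsigned long NEVER = trace.size();       // sentinel: element is never accessed again

    // next_use[i] = position of the next access to trace[i] after position i, or NEVER.
    std::vector<unsigned long> next_use(trace.size());
    std::unordered_map<unsigned, unsigned long> last_pos;
    for (unsigned long i = trace.size(); i-- > 0; ) {
        auto it = last_pos.find(trace[i]);
        next_use[i] = (it == last_pos.end()) ? NEVER : it->second;
        last_pos[trace[i]] = i;
    }

    unsigned long misses = 0;
    // Resident elements keyed by (next use, element), largest next use first,
    // so the MIN victim is always *by_next_use.begin().
    std::set<std::pair<unsigned long, unsigned>,
             std::greater<std::pair<unsigned long, unsigned> > > by_next_use;
    std::unordered_map<unsigned, unsigned long> resident;   // element -> its current key

    for (unsigned long i = 0; i < trace.size(); i++) {
        unsigned addr = trace[i];
        auto it = resident.find(addr);
        if (it != resident.end()) {
            // Hit: drop the stale entry; it is re-inserted below with its new next use.
            by_next_use.erase(std::make_pair(it->second, addr));
        } else {
            // Miss: if the cache is full, evict the element accessed farthest in the future.
            misses++;
            if (resident.size() == cache_size) {
                std::pair<unsigned long, unsigned> victim = *by_next_use.begin();
                by_next_use.erase(by_next_use.begin());
                resident.erase(victim.second);
            }
        }
        resident[addr] = next_use[i];
        by_next_use.insert(std::make_pair(next_use[i], addr));
    }
    return misses;
}

For the second part, below is a minimal sketch of a 2-way set associative LRU CacheAccess. It assumes that the one-argument CacheRead(unsigned) added on 4/16/02 returns a CacheElem, that CacheWrite(index, addr, info) stores both fields as in the direct-mapped example, and that the two ways of a set are placed at indices set and set + num_sets; the last access time is kept in the info field, as the CacheElem comment suggests, and MAXINT-1 is assumed to equal INT_MAX - 1.

#include <climits>
#include "Cache.h"

// Assumed initial value of CacheElem::addr ("MAXINT-1" per the 4/16/02 update);
// adjust this constant to match the actual Cache.h.
static const unsigned NEVER_USED = INT_MAX - 1;

unsigned CacheAccess(Cache *cache, unsigned addr) {
    static unsigned lru_clock = 0;                 // logical time for LRU bookkeeping (one scalar)
    lru_clock++;

    unsigned num_sets = cache->GetSize() >> 1;     // 2 ways per set; the size is a power of two
    unsigned set  = addr & (num_sets - 1);         // set chosen by the low-order address bits
    unsigned way0 = set;                           // way 0 occupies the first half of the array
    unsigned way1 = set + num_sets;                // way 1 occupies the second half

    CacheElem e0 = cache->CacheRead(way0);
    CacheElem e1 = cache->CacheRead(way1);

    if (e0.addr == addr) {                         // hit in way 0: refresh its timestamp
        cache->CacheWrite(way0, addr, lru_clock);
        return way0;
    }
    if (e1.addr == addr) {                         // hit in way 1: refresh its timestamp
        cache->CacheWrite(way1, addr, lru_clock);
        return way1;
    }

    // Miss: fill a never-used way if one exists, otherwise evict the way whose
    // last access time (kept in the info field) is older.
    unsigned victim;
    if (e0.addr == NEVER_USED)      victim = way0;
    else if (e1.addr == NEVER_USED) victim = way1;
    else victim = (e0.info <= e1.info) ? way0 : way1;

    cache->CacheWrite(victim, addr, lru_clock);
    return victim;
}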
Phase II
For each trace running on each cache size (1/10, 1/4, and 1/2), find a CacheAccess function that gives a low miss rate and a low search time. The requirements for CacheAccess functions are given below. The basic idea is that the function cannot store much information (e.g. the trace) and must be efficient (i.e. no division or modulo operations). What you can do is write a separate, off-line tool that experiments with different hash functions on the trace data and selects the best-performing function for each trace and each cache size. A sketch of one such parameterized function appears after the requirements list below.
Requirements for the CacheAccess function
- Limited storage space
  - It can use the info field of cache elements.
  - It can use no more than 100 scalar (non-array) variables.
  - It can use array data. The combined size of all array data cannot be more than 1/10 of the cache size.
  - Data in CacheAccess can be preloaded for a trace or a cache size before execution.
- Limited code size
  - The source file of the function can be no more than 4KB in size.
- No expensive operations
  - It cannot use division or modulo operations.
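As a hypothetical illustration, here is a minimal sketch of one family of CacheAccess functions that the off-line tool could search over: a direct-mapped cache indexed by a multiplicative hash, which needs no division or modulo and uses only two preloaded scalar constants. MULT and SHIFT are placeholders that the tool would tune per trace and per cache size, with SHIFT chosen so the index stays below the cache size (a power of two); the one-argument CacheRead is assumed to return a CacheElem.

#include "Cache.h"

// Hypothetical constants preloaded by the off-line tuning tool for one trace
// and one cache size; the values below are placeholders only.
static unsigned MULT  = 2654435761u;   // an odd multiplier; the tool searches for a good one
static unsigned SHIFT = 28;            // e.g. 32 - log2(cache size), so the index fits the cache

unsigned CacheAccess(Cache *cache, unsigned addr) {
    unsigned index = (addr * MULT) >> SHIFT;    // multiplicative hash: no division or modulo
    CacheElem elem = cache->CacheRead(index);   // single-probe search keeps search time low
    if (elem.addr != addr)                      // miss: load the accessed element
        cache->CacheWrite(index, addr, 0);
    return index;
}

The off-line tool itself can stay simple: it replays the trace against a plain array (the Cache module is not needed off-line) for many candidate constants and keeps the pair with the fewest misses, for example:

#include <vector>

// Count misses of the multiplicative-hash scheme above for one candidate
// (mult, shift) pair, using a plain array in place of the Cache module.
unsigned long CountMisses(const std::vector<unsigned> &trace,
                          unsigned mult, unsigned shift, unsigned cache_size) {
    std::vector<unsigned> slot(cache_size, 0xffffffffu);    // 0xffffffff marks an empty slot
    unsigned long misses = 0;
    for (unsigned long i = 0; i < trace.size(); i++) {
        unsigned index = (trace[i] * mult) >> shift;
        if (slot[index] != trace[i]) { misses++; slot[index] = trace[i]; }
    }
    return misses;
}

The winning constants are then preloaded into the CacheAccess source submitted for that trace and cache size.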
Turn-in requirements
- A file titled "REPORT" that precisely describes your method for finding the adaptive caching function for each trace and each cache size. I want you to report on failed attempts as well, since negative results are also part of research findings. You should also provide an analysis of your results. The quality of the work is determined by all three components: the product, the effort, and the understanding.
- For cross-comparison, have your system generate CacheAccess functions that minimize
  - the miss rate, and then the search time for that miss rate
  - the quantity (miss_rate + search_time/30)
- Submit all program files that are necessary to verify your results, with a user manual that gives an overview of your programs and instructions on compiling and running them.
Reporting of intermediate results
- I will maintain a best-result page listing the groups that produce the best caching result for each trace and each cache size. Although it is not a mandatory requirement, I encourage you to let me know when your group obtains a better result than one already listed, and I will update the best-result page. I will display the best-result page at the beginning of each of the next three classes until the project is over.