Due on Sunday, March 23. This assignment will be managed by the TA --- Konstantinos (Kostas) Menychtas. Your completed assignment should be submitted to the TA. Most of the assignment description below was authored by the TA as well. If you have questions about this assignment, please contact the TA first, and contact the instructor only if your questions cannot be resolved with the TA.
In addition to the stated purpose of the assignment (studying delayed writes), another purpose is to re-familiarize you with the Linux development environment. If you choose a Linux-related final course project, you may need to work in this environment.
In a delayed write, the data to be written is buffered (delayed) in memory for a period of time before being sent to the external storage device. Delayed writes are a common operating system feature for improving write I/O performance. However, delayed writes may result in a loss of writes after system failures, violating data durability. In this assignment, you are asked to experiment with different configurations of delayed writes and understand their implications for performance and data durability.
Delayed writes in Linux 2.6.10:
Linux employs background kernel threads called
pdflush to clean up the page-cache and write dirty pages back to disk.
There are two important factors in deciding when to write back
dirty pages: 1) How long has a dirty page been in the page-cache?
2) How much aggregate space do dirty pages occupy in the page-cache?
These factors are controlled by several configurable parameters in
the /proc
file system interface. Specifically:
/proc/sys/vm/dirty_writeback_centisecs
(default 500): In hundredths of a second, this is how often
the pdflush background threads wake up to examine dirty data
for possible writebacks.
/proc/sys/vm/dirty_expire_centisecs
(default 3000): In hundredths of a second, how long a page can stay
dirty in the page-cache before it must be
written back at the next opportunity when a pdflush thread runs.
/proc/sys/vm/dirty_background_ratio
(default 10): Maximum percentage of active (or recently referenced)
memory pages that can be filled with dirty data before the
background pdflush threads begin writeback.
/proc/sys/vm/dirty_ratio
(default 40): Maximum percentage of memory pages that can be filled
with dirty data before processes must write data synchronously in
the foreground.
Setting dirty_background_ratio and dirty_ratio to zero is a good way
to force immediate dirty page writebacks (effectively write-through).
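For example, these parameters can be changed by writing to the
corresponding /proc files as root. Echoing values from a shell works;
the following minimal C sketch does the same (set_vm_param.c is our
own hypothetical helper, not part of the assignment):

    /* set_vm_param.c: write a value to a /proc/sys/vm parameter.
     * Usage (as root): ./set_vm_param dirty_ratio 0 */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        char path[256];
        FILE *fp;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <parameter> <value>\n", argv[0]);
            return 1;
        }
        snprintf(path, sizeof(path), "/proc/sys/vm/%s", argv[1]);
        fp = fopen(path, "w");
        if (fp == NULL) {
            perror(path);
            return 1;
        }
        fprintf(fp, "%s\n", argv[2]);
        fclose(fp);
        return 0;
    }

For instance, ./set_vm_param dirty_ratio 0 together with
./set_vm_param dirty_background_ratio 0 forces the write-through
behavior described above.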
In addition to these control parameters, you may also find it useful
to monitor some relevant system statistics in the /proc
file system.
For instance, the file /proc/meminfo
(reference) contains
page-cache related statistics. Some fields of interest are
Cached (size of the page-cache), Dirty (size of current
dirty pages), and Writeback (size of pages currently being
written to disk).
Corresponding statistics (in number of pages rather than in size) are
maintained in /proc/vmstat.
Also in /proc/diskstats
(reference),
you can find statistics about reads/writes at the I/O device level.
Finally, this reference
may be a useful source for general information on the /proc
file system.
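As one way to collect these statistics over time, a small sampler
along the following lines (our own sketch, not provided code) records
the Dirty and Writeback fields of /proc/meminfo once per second;
redirect its output to a file to obtain a time series for your plots:

    /* sample_dirty.c: print the Dirty and Writeback lines of
     * /proc/meminfo once per second, prefixed with elapsed seconds.
     * Stop with Ctrl-C. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        int t;

        for (t = 0; ; t++) {
            FILE *fp = fopen("/proc/meminfo", "r");
            if (fp == NULL) {
                perror("/proc/meminfo");
                return 1;
            }
            while (fgets(line, sizeof(line), fp) != NULL) {
                if (strncmp(line, "Dirty:", 6) == 0 ||
                    strncmp(line, "Writeback:", 10) == 0)
                    printf("%d %s", t, line);
            }
            fclose(fp);
            fflush(stdout);
            sleep(1);
        }
        return 0;   /* not reached */
    }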
Note that we provide here only a partial description of Linux
dirty data management, enough to give you a basic understanding for
this assignment. Additional topics on Linux dirty data management that
are not discussed here include:
1) dirty data writebacks as a result of normal page replacement under
memory pressure; 2) application-initiated synchronous writes through
system calls such as fsync().
You may look into these issues if you are interested.
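As a quick illustration of the second topic, an application can force
its own data to disk with fsync(); a minimal sketch (the file name
testfile is an arbitrary choice):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "durable data\n";
        int fd = open("testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);

        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }
        /* Block until the file's dirty pages reach the disk; without
         * this call, the data may sit in the page-cache for a while. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }
        close(fd);
        return 0;
    }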
Task #1: evaluation of delayed writes
You are asked to measure the performance and the data durability risk
of delayed writes under different configurations.
Specifically, you should adjust some of the configurable parameters
controlling delayed write aggressiveness in /proc/sys/vm/, as
mentioned earlier. Which parameters to adjust, and how to adjust them,
is left to your discretion.
At each delayed write configuration, you should report the following:
1) the amount of dirty data in memory over time (e.g., the Dirty field
of /proc/meminfo); 2) the write I/O activity observed at the device
level, from /proc/diskstats.

For the purpose of measurement, you need to run some write-intensive
workload. Please first run a simple provided benchmark --- simple.c.
Yes, you need to copy the source and compile it before running. This
benchmark exercises two disk access patterns: sequential (s) and
random (r).

./simple r 1000000 100 means --- run the random access pattern for 100
seconds, doing random writes of CHUNK bytes in a file of FILESIZE
bytes every 1 second (1000000 microseconds); CHUNK and FILESIZE are
macros defined in the source.

./simple s 1000000 100 means --- run the sequential access pattern for
100 seconds, repeatedly writing CHUNK bytes and skipping SKIP bytes in
a file of FILESIZE bytes every 1 second (1000000 microseconds); SKIP
is another macro defined in the source.

Sampling the /proc file system statistics, probably with a script,
should be your primary measurement approach.
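Purely as an illustration of the random pattern described above, a
stripped-down writer might look like the sketch below. The real
simple.c is provided with the assignment and may differ; the CHUNK and
FILESIZE values here are our own assumptions, not the macros from the
provided source.

    #define _XOPEN_SOURCE 500   /* for pwrite() */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define CHUNK    4096                /* bytes per write (assumed) */
    #define FILESIZE (64 * 1024 * 1024)  /* file size in bytes (assumed) */

    int main(void)
    {
        char buf[CHUNK];
        int fd = open("testfile", O_CREAT | O_WRONLY, 0644);
        int i;

        if (fd < 0) { perror("open"); return 1; }
        memset(buf, 'x', sizeof(buf));
        srand(time(NULL));
        for (i = 0; i < 100; i++) {      /* roughly 100 seconds */
            /* pick a random CHUNK-aligned offset within the file */
            off_t off = ((off_t)(rand() % (FILESIZE / CHUNK))) * CHUNK;
            if (pwrite(fd, buf, CHUNK, off) != CHUNK) {
                perror("pwrite");
                return 1;
            }
            sleep(1);                    /* one write per second */
        }
        close(fd);
        return 0;
    }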
Beyond this simple workload, you are encouraged (but not required) to evaluate delayed writes with some additional write-intensive workloads. Some simple-to-setup I/O benchmarks can be found at this reference.
Task #2: specific benefits of delayed writes
One primary performance advantage of delayed writes is
write cancellation
--- repetitive writes on the same data may be coalesced into a single
device-level write I/O operation. In addition to write cancellation,
explain some additional performance advantage(s) of delayed writes.
Design simple workload(s) to demonstrate such performance advantage(s)
at different delayed write configurations.
Also show the amount of dirty data in memory at each configuration, by
following what you did for Task #1.
To demonstrate additional performance advantage(s) beyond write
cancellation, your workload(s) must contain no pattern of repetitive
writes to the same memory page.
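For contrast, the sketch below (our own illustration, not a provided
program; the file name and loop parameters are arbitrary) shows the
kind of repetitive pattern that write cancellation absorbs, and that
your Task #2 workloads must therefore avoid:

    #define _XOPEN_SOURCE 500   /* for pwrite() */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("cancel_test", O_CREAT | O_WRONLY, 0644);
        int i;

        if (fd < 0) { perror("open"); return 1; }
        memset(buf, 'x', sizeof(buf));
        for (i = 0; i < 1000; i++) {
            /* every iteration re-dirties the same 4KB page at offset 0;
             * with delayed writes, far fewer device-level writes result */
            if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
                perror("pwrite");
                return 1;
            }
            usleep(10000);   /* 10 ms between writes */
        }
        close(fd);
        return 0;
    }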
You will be experimenting on a Xen/Linux virtual machine hosted at
canberra.cs.rochester.edu. We will email you access credentials
(username and password) for canberra.
The default root password for the pre-initialized virtual machine domain
can be found in the file /kernel/readme
on canberra.
We believe all of you have worked with this environment in the
Operating Systems course. If you need a reminder on how to connect to
and set up your Xen/Linux environment, please refer to
this page.
We set up the Xen/Linux environment for this assignment for three reasons: 1) you need root privilege for this assignment; 2) some of you might want to tweak the kernel for reasons we cannot foresee at this point; 3) it re-familiarizes you with the Linux development environment in case you choose a Linux-related final course project. The virtualization environment, however, brings some additional complexity to I/O processing. We expect that this complexity will not significantly affect the stated goals of this assignment. If you find otherwise, or if you are interested in exploring such virtualization-related I/O complexity, please let us know and we will be glad to discuss it with you.
Turn-in:
You are asked to turn in a written report. The report should describe
the results of your evaluation (graphical illustrations are
strongly desired).
Describe important implications of your results and any further thoughts
you developed through this process. Do us a favor and do not turn in a
hand-written report unless you are sure your handwriting is perfectly
legible. Your report should also indicate the location of your
designed workload(s) for Task #2 in the virtual machine domain; we may
want to look at their sources during grading.