Kai Shen, Ming Zhong, Chuanpeng Li
Department of Computer Science, University of Rochester
{kshen, zhong, cli}@cs.rochester.edu
Our approach helps us quickly identify four performance bugs in the I/O system of the recent Linux 2.6.10 kernel (one in the file system prefetching, two in the anticipatory I/O scheduler, and one in the elevator I/O scheduler). Our experiments with two Web server benchmarks, a trace-driven index searching server, and the TPC-C database benchmark show that the corrected kernel improves system throughput by up to five-fold compared with the original kernel (averaging 6%, 32%, 39%, and 16% for the four server workloads).
It is not uncommon for complex systems to perform worse than expected. In the context of this paper, we define performance bugs as problems in the system implementation that degrade performance compared with that intended by the design protocol/algorithm. Examples of such bugs include overly simplified implementations, mishandling of special cases, or plainly erroneous coding. These bugs, once discovered, are typically much easier to fix than implementing newer and better protocols/algorithms. However, it is challenging to identify performance problems and pinpoint their root causes in large software systems.
Previous techniques such as program instrumentation [13,20], complete system simulation [24], performance assertion checking [22], and detailed overhead categorization [9] were proposed to understand performance problems in complex computer systems and applications. Some recent performance debugging work employs statistical analysis of online system traces [1,7] to identify faulty components in large systems. In general, these techniques focus on offering fine-grained examination of the target system/application in specific execution settings. However, many systems (such as the I/O system in an OS) are designed to support wide ranges of workload conditions, and they may also be configured in many different ways. It is desirable to explore performance anomalies over a comprehensive universe of execution settings for such systems. Such exploration is particularly useful for performance debugging without knowledge of runtime workload conditions and system configurations.
We propose a new approach that systematically characterizes performance anomalies in a system to aid performance debugging. The key advantage is that we can comprehensively consider wide ranges of workload conditions and system configurations. Our approach proceeds in the following steps (shown in Figure 1).
The result of our approach contains profiles for potential performance bugs, each with a system component where the bug is likely located and the settings (workload conditions and system configurations) where it would inflict significant performance losses. Such results then assist further human debugging. They also help verify or explain bugs after they are discovered. Even if some bugs cannot be immediately fixed, our anomaly characterization identifies workload conditions and system configurations that should be avoided if possible.
Note that discrepancies between measured system performance and model prediction can also be caused by errors in the performance model. Therefore, we must examine both the performance model and the system implementation when presented with a bug profile. Since the performance model is much less complex in nature, we focus on debugging the system implementation in this paper.
It is possible for our approach to have false positives (producing characterizations that do not correspond to any real bugs) and false negatives (missing some bugs in the output). As a debugging aid where human screening is available, false positives are less of a concern. In order to achieve low false negatives, we sample wide ranges of workload parameters and various system configurations in a systematic fashion.
The rest of this paper presents our approach in detail and describes our experience of discovering operating system performance bugs when supporting disk I/O-intensive online servers. Although our results in this paper focus on one target system and one type of workload, we believe that the proposed model-driven anomaly characterization approach is general. It may assist the performance debugging of other systems and workloads as long as comprehensive performance models can be built for them.
The targeted workloads in this work are data-intensive online servers that access large disk-resident datasets while serving multiple clients simultaneously. Examples include Web servers hosting large datasets and keyword search engines that support interactive search over terabytes of indexed Web pages. In these servers, each incoming request is serviced by a request handler, which can be a thread in a multi-threaded server or a series of event handlers in an event-driven server. The request handler repeatedly accesses disk data and consumes CPU before completion. A request handler may block if a needed resource is unavailable. While request handlers consume both disk I/O and CPU resources, the overall server throughput is often dominated by I/O system performance when the application data size far exceeds the available server memory. For ease of model construction in the next section, we assume that request handlers perform mostly read-only I/O when accessing disk-resident data. Many online services, such as Web serving and index searching, do not involve any updates to the hosted datasets.
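As a concrete illustration of this execution model (a minimal sketch; the file name, access-list format, and use of Python threads are our own assumptions, not part of any particular server), each request handler below alternates synchronous reads of disk-resident data with CPU work:

```python
import os
import threading

def request_handler(path, accesses, cpu_work):
    """One request handler: synchronously read disk-resident data, then compute.

    accesses : list of (offset, length) byte ranges to read (read-only)
    cpu_work : callable consuming each buffer (models the CPU time per access)
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        for offset, length in accesses:
            buf = os.pread(fd, length, offset)   # synchronous read; may block on disk I/O
            cpu_work(buf)                        # think time between consecutive I/O accesses
    finally:
        os.close(fd)

# A multi-threaded server runs one such handler per in-flight request, so the
# I/O system sees `concurrency` interleaved synchronous access sequences.
handlers = [threading.Thread(target=request_handler,
                             args=("dataset.bin", [(i * 65536, 65536)], len))
            for i in range(8)]
```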
Characteristics of the application workload may affect the performance of a disk I/O-intensive online server. For example, the data access locality and sequentiality largely determine how much of the disk time is spent on data transfer or seek and rotation.
We describe operating system features that affect the I/O performance of data-intensive online servers.
Prefetching. Data accesses belonging to a single request handler often exhibit strong locality due to semantic proximity. During concurrent execution, however, data access of one request handler can be frequently interrupted by other active request handlers in the server. This may severely affect I/O efficiency due to long disk seek and rotational delays. The employment of OS prefetching can partially alleviate this problem. A larger prefetching depth increases the granularity of I/O requests, and consequently yields less frequent disk seeks and rotations. On the other hand, kernel-level prefetching may retrieve unneeded data due to the lack of knowledge on how much data is desired by the application. Such a waste tends to be magnified by aggressive prefetching policies.
I/O scheduling. Traditional elevator-style I/O schedulers such as Cyclic-SCAN sort and merge outstanding I/O requests to reduce the seek distance on storage devices. In addition, the anticipatory I/O scheduling [14] can be particularly effective for concurrent I/O workloads. At the completion of an I/O request, the anticipatory disk scheduler may choose to keep the disk idle for a short period of time even when there are pending requests. The scheduler does so in anticipation of a new I/O request from the same process that issued the just completed request, which often requires little or no seeking from the current disk head location. However, anticipatory scheduling may not be effective when substantial think time exists between consecutive I/O requests. The anticipation may also be rendered ineffective when a request handler has to perform interleaving synchronous I/O that does not exhibit strong locality. Such a situation arises when a request handler simultaneously accesses multiple data streams.
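The following sketch captures the decision an anticipatory scheduler faces when a synchronous request completes. It is our own simplification, not the Linux implementation; the fixed anticipation window and the think-time heuristic are illustrative assumptions.

```python
def on_request_completion(pending, last_proc, mean_thinktime_ms, window_ms=6.0):
    """Decide the disk's next action after a request from `last_proc` finishes.

    pending           : list of (proc_id, seek_distance) for queued requests
    mean_thinktime_ms : per-process average gap between consecutive I/O requests
    Returns ('idle', window_ms) to anticipate, or ('dispatch', request).
    """
    # Anticipate only when the just-served process tends to issue its next
    # (usually nearby) request quickly; otherwise waiting wastes disk time.
    if mean_thinktime_ms.get(last_proc, float('inf')) < window_ms:
        return ('idle', window_ms)
    if not pending:
        return ('idle', window_ms)
    # Fall back to seek reduction: dispatch the closest pending request.
    return ('dispatch', min(pending, key=lambda r: r[1]))
```

Note how anticipation breaks down in exactly the two cases named above: long think times between requests, and interleaved accesses that make the same process's next request land far away.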
Others. For data-intensive workloads, memory caching is effective in improving the application-perceived performance over the raw storage I/O throughput. Most operating systems employ LRU-style policies to manage data cached in memory.
File system implementation issues such as file layout can also affect the system performance. We assume the file data is laid out contiguously on the storage. This is a reasonable assumption since the OS often tries to allocate file data contiguously on creation and the dataset is unchanged under our targeted read-only workloads.
Our model-driven performance debugging requires model-based prediction of the overall system performance under wide ranges of workload conditions and various system configurations. Previous studies have recognized the importance of constructing I/O system performance models. Various analytical and simulation models have been constructed for disk drives [5,16,25,28,36], disk arrays [8,33], OS prefetching [6,29,31], and memory caching [15]. However, performance models for individual system components do not capture the inter-dependence of different components and consequently they may not accurately predict the overall application performance.
When modeling a complex system like ours, we follow the methodology of decomposing it into weakly coupled subcomponents. More specifically, we divide our whole-system I/O throughput model into four layers -- OS caching, prefetching, OS-level I/O scheduling, and the storage device. Every layer may transform its input workload to a new workload imposed on the lower layer. For example, I/O scheduling may alter inter-request I/O seek distances. Each layer may also change the predicted I/O throughput from the lower layer due to additional benefits or costs it may induce. For instance, prefetching adds the potential overhead of fetching unneeded data. As indicated in Figure 2, we use $W_1$, $W_2$, $W_3$, and $W_4$ to denote the original and transformed workloads at each layer ($W_1$ being the application-issued workload and $W_4$ the workload imposed on the storage device). We similarly use $T_1$, $T_2$, $T_3$, and $T_4$ to represent the I/O throughput results seen at each layer.
Figure 2 illustrates our layered system model on I/O throughput. This paper focuses on the I/O system performance debugging and we bypass the OS caching model in our study. For the purpose of comparing our performance model with real system measurement, we add additional code in the operating system to disable the caching. More information on this is provided in Section 4.1. The rest of this section illustrates the other three layers of the I/O throughput model in detail. While mostly applicable to many general-purpose OSes, our model more closely follows the target system of our debugging work -- the Linux 2.6 kernel.
We define a sequential access stream as a group of spatially contiguous data items that are accessed by a single request handler. Note that the request handler may not continuously access the entire stream at once. In other words, it may perform interleaving I/O that does not belong to the same stream. We further define a sequential access run as a portion of a sequential access stream that does not have such interleaving I/O. Figure 3 illustrates these two concepts. All read accesses from request handlers are assumed to be synchronous.
We consider the workload transformation of I/O prefetching on a sequential access stream of length $S_{\mathrm{stream}}$. I/O prefetching groups data accesses of the stream into requests of size $p$ -- the I/O prefetching depth. Therefore, the number of I/O requests for serving this sequential stream is $\lceil S_{\mathrm{stream}} / p \rceil$.
Operating system prefetching may retrieve unneeded data due to the lack of knowledge on how much data is desired by the application. In the transformed workload, the total amount of fetched data for the stream is:
$$S_{\mathrm{fetched}} = \left\lceil \frac{S_{\mathrm{stream}}}{p} \right\rceil \cdot p \qquad (3)$$
However, wasted prefetching does not exist when each sequential access stream references a whole file since the OS would not prefetch beyond the end of a file. In this case, I/O prefetching does not fetch unneeded data and it does not change the I/O throughput. Therefore:
$$S_{\mathrm{fetched}} = S_{\mathrm{stream}} \qquad (4)$$

$$T_2 = T_3 \qquad (5)$$
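The prefetching-layer transformation above can be summarized in a few lines; this is a minimal sketch using the symbols just introduced (the variable names are ours):

```python
import math

def prefetch_transform(stream_len, prefetch_depth, whole_file=False):
    """Return (request_count, fetched_bytes) for one sequential access stream.

    stream_len     : S_stream, bytes accessed sequentially by one request handler
    prefetch_depth : p, the OS prefetching granularity (e.g., 128 KB)
    whole_file     : True if the stream covers a whole file; the OS never
                     prefetches past end-of-file, so nothing is wasted (Eq. 4).
    """
    n_req = math.ceil(stream_len / prefetch_depth)
    fetched = stream_len if whole_file else n_req * prefetch_depth   # Eq. (3)
    return n_req, fetched

# Example: a 1 MB stream with 128 KB prefetching -> 8 requests and no waste;
# a (1 MB + 1 byte) stream -> 9 requests and ~127 KB of wasted prefetching.
```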
The I/O scheduling layer passes the retrieved data to the upper layer without any change. Therefore it does not change the I/O throughput:
$$T_3 = T_4 \qquad (6)$$
I/O scheduling transforms the workload primarily by sorting and merging I/O requests to reduce the seek distance on storage devices. We discuss such workload transformation by the traditional elevator-style I/O scheduling and by the anticipatory I/O scheduling.
I/O scheduling algorithms such as Cyclic-SCAN reorder outstanding I/O requests based on data location and schedule the I/O request close to the current disk head location. The effectiveness of such scheduling is affected by the concurrency of the online server. Specifically, a smaller average seek distance can be attained at higher server concurrency when the disk scheduler can choose from more concurrent requests for seek reduction. We estimate that the number of simultaneous disk seek requests in the SCAN queue is equal to the server concurrency level $c$. When the disk scheduler can choose from $c$ requests at uniformly random disk locations, a previous study [27] indicates that the inter-request seek distance $d_{\mathrm{seek}}$ follows the following distribution, where $D$ denotes the total data span on the disk:

$$\mathrm{Prob}[d_{\mathrm{seek}} \le s] = 1 - \left(1 - \frac{s}{D}\right)^{c}, \qquad 0 \le s \le D \qquad (7)$$
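As a quick sanity check of Equation (7) (a sketch under the same assumption of uniformly placed requests), the nearest of $c$ uniformly placed requests over a span $D$ has exactly this cumulative distribution, and its mean is $D/(c+1)$:

```python
import random

def mean_seek_distance(concurrency, D=1.0, trials=100_000):
    """Empirical mean of the nearest-request seek distance when the scheduler
    can pick among `concurrency` requests placed uniformly at random over a
    span of D; Equation (7) is the CDF of this minimum, whose mean is D/(c+1)."""
    total = 0.0
    for _ in range(trials):
        total += min(random.uniform(0.0, D) for _ in range(concurrency))
    return total / trials

# e.g. mean_seek_distance(4) is close to 1/5: higher concurrency -> shorter seeks.
```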
During concurrent execution (concurrency greater than one), the I/O scheduler switches to a different stream when a prefetching request from one stream is completed. Therefore it does not change the granularity of I/O requests passed from the prefetching layer. Consequently the average size of an I/O request is:
$$\bar{R}_{\mathrm{req}} = \frac{S_{\mathrm{fetched}}}{\lceil S_{\mathrm{stream}}/p \rceil} \qquad (8)$$
At the concurrency of one, all I/O requests belonging to one sequential access run are merged into a single request. Let $S_{\mathrm{run}}$ denote the length of a sequential access run; the average size of an I/O request then becomes:

$$\bar{R}_{\mathrm{req}} = S_{\mathrm{run}} \qquad (9)$$
During concurrent execution, the anticipatory I/O scheduling [14] may temporarily idle the disk so that consecutive I/O requests that belong to the same request handler are serviced without interruption. This effectively merges all prefetching requests of each sequential access run (defined in Section 3.1) into a single I/O request. Thus the average size of an I/O request in the transformed workload is:
$$\bar{R}_{\mathrm{req}} = S_{\mathrm{run}} \qquad (10)$$
The other effect of the anticipatory I/O scheduling is that it induces disk idle time during anticipatory waiting when useful work could otherwise be performed. The disk idle time charged to each I/O request is the total inter-request think time for the corresponding sequential access run.
Let the disk transfer rate be $R_t$. Also let the seek time and rotational delay be $t_{\mathrm{seek}}$ and $t_{\mathrm{rot}}$ respectively, and let $t_{\mathrm{idle}}$ be the anticipation-induced idle time charged to a request. The disk resource consumption (in time) for processing a request of length $l$ includes a single seek, rotation, and the data transfer as well as the idle time:

$$t_{\mathrm{req}} = t_{\mathrm{seek}} + t_{\mathrm{rot}} + \frac{l}{R_t} + t_{\mathrm{idle}} \qquad (11)$$

Since the seek, rotation, and idle overheads are independent of the request length $l$, we have the expected per-request disk time:

$$\bar{t}_{\mathrm{req}} = \bar{t}_{\mathrm{seek}} + \bar{t}_{\mathrm{rot}} + \frac{\bar{R}_{\mathrm{req}}}{\bar{R}_t} + \bar{t}_{\mathrm{idle}} \qquad (12)$$

Therefore, the I/O throughput at the storage device is:

$$T_4 = \frac{\bar{R}_{\mathrm{req}}}{\bar{t}_{\mathrm{seek}} + \bar{t}_{\mathrm{rot}} + \bar{R}_{\mathrm{req}}/\bar{R}_t + \bar{t}_{\mathrm{idle}}} \qquad (13)$$
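A minimal sketch instantiating Equations (11)-(13) for the storage-device layer; the numbers in the example are placeholders, not measurements of the drive used in this paper:

```python
def device_throughput(avg_req_bytes, avg_seek_ms, avg_rot_ms, avg_idle_ms, transfer_MBps):
    """I/O throughput at the storage device, Equation (13): useful bytes per
    request divided by the per-request disk time (one seek + one rotation +
    data transfer + any anticipation-induced idling)."""
    transfer_ms = avg_req_bytes / (transfer_MBps * 1e6) * 1e3
    per_req_ms = avg_seek_ms + avg_rot_ms + avg_idle_ms + transfer_ms
    return avg_req_bytes / (per_req_ms / 1e3)    # bytes per second

# Placeholder example: 128 KB requests, 5 ms seek, 3 ms rotation, no idling,
# 50 MB/s transfer rate -> roughly 12 MB/s of delivered throughput.
print(device_throughput(128 * 1024, 5.0, 3.0, 0.0, 50.0) / 1e6)
```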
Below we determine the average data transfer rate $\bar{R}_t$, the average rotational delay $\bar{t}_{\mathrm{rot}}$, and the average seek time $\bar{t}_{\mathrm{seek}}$. The sequential transfer rate depends on the data location (due to zoning on modern disks). With the knowledge of the data span on the disk and the histogram of data transfer rates at each disk location, we can then determine the average data transfer rate. We consider the average rotational delay to be the mean rotational time between two random track locations (i.e., the time it takes the disk to spin half a revolution).
Earlier studies [25,28] have discovered that the seek time depends on the seek distance $d$ (the distance to be traveled by the disk head) in the following way:

$$t_{\mathrm{seek}}(d) = \begin{cases} 0 & \text{if } d = 0 \\ a_1 + a_2\sqrt{d} & \text{if } 0 < d \le d_0 \\ a_3 + a_4\, d & \text{if } d > d_0 \end{cases} \qquad (14)$$

where $a_1$ through $a_4$ are disk-specific constants and $d_0$ is the boundary distance between short and long seeks.
Combining the seek distance distribution in Equation (7) and the above Equation (14), we have the following cumulative probability distribution for the seek time:

$$\mathrm{Prob}[t_{\mathrm{seek}} \le t] = \mathrm{Prob}\!\left[d_{\mathrm{seek}} \le t_{\mathrm{seek}}^{-1}(t)\right] = 1 - \left(1 - \frac{t_{\mathrm{seek}}^{-1}(t)}{D}\right)^{c} \qquad (15)$$

where $t_{\mathrm{seek}}^{-1}(\cdot)$ inverts the piecewise function in Equation (14).
Therefore, the expected average seek time is:

$$\bar{t}_{\mathrm{seek}} = \int_{0}^{t_{\mathrm{max}}} \left(1 - \mathrm{Prob}[t_{\mathrm{seek}} \le t]\right) dt \qquad (16)$$

where $t_{\mathrm{max}} = t_{\mathrm{seek}}(D)$ is the seek time for the maximum seek distance.
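The expectations above can be evaluated numerically. The sketch below draws seek distances from the Equation (7) distribution and averages the Equation (14) seek times, which is equivalent to evaluating Equations (15) and (16); the seek-model parameters $a_1$..$a_4$, $d_0$ are placeholders rather than the measured values.

```python
import math
import random

def half_rotation_ms(rpm):
    """Average rotational delay: half a revolution (e.g., 3 ms at 10,000 RPM)."""
    return 0.5 * 60_000.0 / rpm

def expected_seek_ms(concurrency, D, a1, a2, a3, a4, d0, samples=200_000):
    """Monte Carlo estimate of the average seek time (Equation (16))."""
    def seek_ms(d):                       # piecewise seek model of Equation (14)
        if d == 0:
            return 0.0
        return a1 + a2 * math.sqrt(d) if d <= d0 else a3 + a4 * d
    total = 0.0
    for _ in range(samples):
        # nearest of `concurrency` uniformly placed requests: Equation (7)
        d = min(random.uniform(0.0, D) for _ in range(concurrency))
        total += seek_ms(d)
    return total / samples
```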
Disk drives are usually equipped with a limited amount of cache. Due to its small size, its main usage is disk track prefetching, while its caching effects are negligible for data-intensive applications with large working-set sizes. We do not consider such caching effects in our model.
For clarity, we list the definitions for all symbols used in the previous subsections (Table 1).
Symbol            Definition
S_stream          length of a sequential access stream
S_run             length of a sequential access run
S_fetched         total amount of data fetched for a stream by prefetching
p                 I/O prefetching depth
c                 server concurrency level
D                 total data span on the disk
R_req             size of an I/O request issued to the storage device
R_t               disk data transfer rate
t_seek, t_rot     seek time and rotational delay for an I/O request
t_idle            anticipation-induced disk idle time charged to an I/O request
t_req             total disk time consumed by an I/O request
W_1 .. W_4        workload seen at the caching, prefetching, I/O scheduling, and storage device layers
T_1 .. T_4        I/O throughput seen at the caching, prefetching, I/O scheduling, and storage device layers
a_1..a_4, d_0     seek time model parameters in Equation (14)
(A bar over a symbol denotes its average value.)
We summarize the interfaces to our performance model, which include the workload characteristics, operating system configuration, and storage device properties.
Based on the whole-system performance model for I/O-intensive online servers, this section describes our approach to acquire a representative set of anomalous workload and configuration settings. We also present techniques to cluster anomalous settings into groups likely attributed to individual bugs. We then characterize each of them with correlated system component and workload conditions. Although certain low-level techniques in our approach are specifically designed for our target system and workloads, we believe the general framework of our approach can also be used for performance debugging of other large software systems.
Performance anomalies (manifested by deviations of measurement results from the model-predicted performance) occur for several reasons. In addition to performance bugs in the implementation, measurement errors and model inaccuracies can also cause performance anomalies. Aside from significant modeling errors, anomalies caused by these other factors are usually small in magnitude. We screen out these factors by only counting the relatively large performance anomalies. Although this screening may also overlook some performance bugs, those that cause significant performance degradations would not be affected.
Performance anomalies may occur at many different workload conditions and system configurations. We consider each occurrence under one setting as a single point in the multi-dimensional space where each workload condition and system configuration parameter is represented by a dimension. For the rest of this paper, we simply call this multi-dimensional space the parameter space. Our anomaly sampling proceeds in the following two steps. First, we choose a number ($N$) of sample settings from the parameter space in a uniformly random fashion. We then compare measured system performance with model prediction under these settings. Anomalous settings are those at which measured performance trails model prediction by at least a certain threshold.
We define the infliction zone of each performance bug as the union of settings in the parameter space at which the bug would inflict significant performance losses. By examining a uniformly random set of sample settings, our anomaly sampling approach achieves the following property concerning false negatives (missing some bugs). For a bug whose infliction zone is a proportion $q$ ($0 < q \le 1$) of the total parameter space, the probability that at least one of our $N$ random samples falls into the bug's infliction zone is $1 - (1 - q)^{N}$. With a reasonably large $N$, it is unlikely for our anomaly sampling to miss a performance bug that takes effect under a non-trivial set of workload conditions and system configurations.
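A sketch of the sampling step and of the false-negative bound above; the measurement and model-prediction hooks are stand-ins for the real system and for the Section 3 model.

```python
import random

def sample_anomalies(dimensions, n_samples, measure, predict, threshold=0.10):
    """Uniformly sample the parameter space and keep settings where measured
    performance trails the model prediction by at least `threshold`.

    dimensions       : dict mapping each parameter name to its candidate values
    measure, predict : callables taking a setting dict and returning throughput
    """
    anomalies = []
    for _ in range(n_samples):
        setting = {dim: random.choice(list(vals)) for dim, vals in dimensions.items()}
        measured, predicted = measure(setting), predict(setting)
        if (predicted - measured) / predicted >= threshold:
            anomalies.append(setting)
    return anomalies

def detection_probability(zone_fraction, n_samples):
    """Probability that at least one random sample lands in a bug's infliction
    zone covering `zone_fraction` of the parameter space: 1 - (1 - q)^N."""
    return 1.0 - (1.0 - zone_fraction) ** n_samples

# e.g. detection_probability(0.01, 400) ~= 0.98 even for a zone covering only
# 1% of the parameter space.
```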
We now describe the parameter space for our target workload and system. We first explore the dimensions representing workload properties and then examine the system configuration dimensions.
For each system component that is considered for debugging, we must include system configurations where the component is not activated. The two I/O schedulers are natural alternatives to each other. We augment the operating system to add an option to bypass the prefetching code. We do so by ignoring the readahead heuristics and issuing I/O requests only when data is synchronously demanded by the application. Since our performance model does not consider OS caching, we also add additional code in the operating system to disable the caching. We do so by simply overlooking the cached pages during I/O. Our changes are only a few hundred lines in the Linux 2.6.10 kernel.
The specific dimensions in our parameter space that represent system configurations are the choice of I/O scheduler (elevator or anticipatory) and whether OS prefetching is enabled or bypassed.
Our performance model in Section 3 can predict system performance at different prefetching sizes. However, varying the prefetching size is not useful for our purpose of performance debugging. We use the default maximum prefetching size (128KB for Linux 2.6.10) in our study.
Given a set of anomalous workload condition and system configuration settings, it is still hard to derive useful debugging information without a succinct characterization of the anomalous settings. Further, the system may contain multiple independent performance bugs, and the aggregate characteristics of several bugs may be too confusing to be useful. This section presents an algorithm to cluster anomalous settings into groups likely attributed to individual bugs and to characterize each cluster to guide performance debugging. At a high level, the anomaly sampling described in Section 4.1 precedes the clustering and characterization, which are then followed by the final human debugging. Ideally, each such action sequence discovers one performance bug, and multiple bugs can be identified by iterating this action sequence multiple times.
It is quite common for infliction zones of multiple bugs to cross-intersect with each other. In other words, several bugs might inflict performance losses simultaneously at a single workload condition and system configuration. Classical clustering algorithms such as Expectation Maximization (EM) [10] and K-means [19] typically assume disjoint (or slightly overlapped) clusters and spherical Gaussian distribution for points in each cluster. Therefore they cannot be directly used to solve our problem.
To make our clustering problem more tractable, we assume that the infliction zone of each performance bug takes a hyper-rectangle-like shape in the parameter space. This means that if parameter settings $x = (x_1, \dots, x_d)$ and $y = (y_1, \dots, y_d)$ in the $d$-dimensional parameter space are inflicted by a bug, then any parameter setting $z = (z_1, \dots, z_d)$ with $\min(x_i, y_i) \le z_i \le \max(x_i, y_i)$ for every dimension $i$ is also inflicted by the bug.
A bug's infliction zone takes a hyper-rectangle-like shape if it has a range of triggering settings on each parameter (workload property or system configuration) and the bug's performance effect is strongly correlated with the condition that all parameters fall into respective triggering ranges. When this assumption does not hold for a bug (i.e., its infliction zone does not follow a hyper-rectangle-like shape), our algorithm described below would identify a maximum hyper-rectangle encapsulated within the bug's infliction zone. This might still provide some useful bug characterization for subsequent human debugging.
To the best of our knowledge, the only known clustering algorithm that handles intersected hyper-rectangles is due to Pelleg and Moore [21]. However, their algorithm requires hyper-rectangles to have soft boundaries with Gaussian distributions and hence is not directly applicable to our case, where hyper-rectangles could have infinitely steeply diminishing borders.
We describe our algorithm that identifies and characterizes one dominant cluster from a set of anomalous settings. More specifically, our algorithm attempts to identify a hyper-rectangle in the parameter space that trades off between two properties: 1) most of the sample settings within the hyper-rectangle are anomalous settings; 2) the hyper-rectangle contains as many anomalous settings as possible. In our algorithm, property 1 is ensured by keeping the ratio of anomalous settings to all sample settings in the hyper-rectangle above a certain pre-defined threshold. Property 2 is addressed by greedily expanding the current hyper-rectangle in a way that maximizes the number of anomalous settings contained in the expanded new hyper-rectangle. Algorithm 4.1 illustrates our method to discover a hyper-rectangle that tightly bounds the cluster of anomalous settings related to a dominant bug.
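The sketch below is our reconstruction of the idea behind Algorithm 4.1, not the exact algorithm: it seeds a hyper-rectangle at one anomalous setting and greedily expands it toward further anomalies while the fraction of anomalous samples inside stays above the threshold.

```python
def grow_rectangle(samples, anomalies, threshold=0.9):
    """Greedily grow a hyper-rectangle around a dominant cluster of anomalies.

    samples   : all sampled settings, each a dict of numeric parameter values
    anomalies : the subset of `samples` flagged as anomalous
    Returns {dimension: (low, high)} bounds of the grown hyper-rectangle.
    """
    dims = list(samples[0].keys())

    def inside(s, rect):
        return all(rect[d][0] <= s[d] <= rect[d][1] for d in dims)

    def purity_and_count(rect):
        n_all = sum(1 for s in samples if inside(s, rect))
        n_anom = sum(1 for s in anomalies if inside(s, rect))
        return n_anom / max(n_all, 1), n_anom

    rect = {d: (anomalies[0][d], anomalies[0][d]) for d in dims}   # seed at one anomaly
    improved = True
    while improved:
        improved, best = False, None
        for a in anomalies:                       # candidate expansions, one anomaly at a time
            if inside(a, rect):
                continue
            cand = {d: (min(rect[d][0], a[d]), max(rect[d][1], a[d])) for d in dims}
            purity, count = purity_and_count(cand)
            if purity >= threshold and (best is None or count > best[0]):
                best = (count, cand)
        if best is not None:
            rect, improved = best[1], True
    return rect
```

Projecting the per-dimension ranges of the returned hyper-rectangle yields the cluster characterization described next.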
After the hyper-rectangle clustering, we characterize each cluster by simply projecting the hyper-rectangle onto each dimension of the parameter space. For each dimension (a workload property or a system configuration), we include the projected parameter value range into the characterization. For those dimensions at which the projections cover all possible parameter values, we consider them uncorrelated to the cluster and we do not include them in the cluster characterization.
The computation complexity of our algorithm is $O(N^4)$, since the algorithm has three nested loops with at most $N$ iterations each, and in the innermost loop the numbers of samples and anomalies within a hyper-rectangle are computed by brute-force checking of all sample settings (an $O(N)$ cost). Using pre-constructed orthogonal range trees [18], the complexity of the innermost loop can be improved to $O(\log^{d} N + k)$, where $d$ is the dimensionality of the parameter space and $k$ is the answer size. We use brute-force counting in our current implementation due to its simplicity and its satisfactory performance on our dataset (no more than 1000 sample settings and fewer than 200 anomalies).
We describe our performance debugging of the Linux 2.6.10 kernel (released in December 2004) when supporting I/O-intensive online servers. We repeatedly perform anomaly sampling, clustering, characterization, and human debugging. After each round, we acquire an anomaly cluster characterization that corresponds to one likely bug. The characterization typically contains correlated system component and workload conditions, which hints at where and how to look for the bug. The human debugger has knowledge on the general structure of the OS source code and is familiar with a kernel tracing tool (LTT [37]). After each bug fix, we use the corrected kernel for the next round of anomaly sampling, clustering, characterization, and human debugging.
Our measurement uses a server equipped with dual 2.0GHz Xeon processors, 2GB memory, and an IBM 10KRPM SCSI drive (model "DTN036C1UCDY10"). We measure the disk drive properties, including the Equation (14) seek time parameters for this disk, as input to our performance model (shown in Figure 4). We choose $N = 400$ random workload and system configuration settings in the anomaly sampling. The anomaly threshold is set at 10% (i.e., those settings at which measured performance trails model prediction by at least 10% are considered anomalous settings). The clustering threshold in Algorithm 4.1 is set at 90%.
We describe our results below and we also report the debugging time at the end of this section. The first anomaly cluster characterization is:
The second anomaly cluster characterization is:
The third anomaly cluster characterization is:
The fourth anomaly cluster characterization is:
We show results on the effects of our bug fixes. Figure 5 shows the top 10% model/measurement errors of our anomaly sampling for the original Linux 2.6.10 kernel and after the accumulative bug fixes. The error is defined as the relative shortfall of measured performance against model prediction, i.e., (model-predicted throughput - measured throughput) / model-predicted throughput. Results show that performance anomalies steadily decrease after each bug fix and no anomaly with a 14% or larger error exists after all four bugs are fixed. Figure 6 compares model prediction and measured performance over all samples. Figure 6(A) shows results for the original Linux 2.6.10, where the system performs significantly worse than model prediction at many parameter settings. Figure 6(B) shows the results when all four bugs are fixed, where the system performs close to model prediction at all parameter settings.
We experiment with real server workloads to demonstrate the performance benefits of our bug fixes. All measurements are conducted on servers each equipped with dual 2.0GHz Xeon processors, 2GB memory, and an IBM 10KRPM SCSI drive (as characterized in Figure 4). Each experiment involves a server and a load generation client. The client can adjust the number of simultaneous requests to control the server concurrency level.
We evaluate four server workloads in our study: the SPECweb99 benchmark, a media clip Web server, a trace-driven index searching server, and the TPC-C database benchmark running on MySQL.
To better understand these workloads, we extract their characteristics through profiling. During profiling runs, we intercept relevant I/O system calls in the OS kernel, including open, close, read, write, and seek. We extract the desired application characteristics by analyzing the system call traces collected during the profiling runs. However, system call interception does not work well for the memory-mapped I/O used by the TPC-C database. In this case, we intercept device driver-level I/O traces and use them to infer the data access pattern of the workload. Table 3 lists some characteristics of the four server workloads. The stream statistics for TPC-C are for read streams only. Among the four workloads, we observe that media clips has long sequential access streams while SPECweb99 and TPC-C have relatively short streams. We also observe that all workloads except index searching have about one run per stream, which indicates that each request handler does not perform interleaving I/O when accessing a sequential stream.
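As an illustration of how such traces reduce to the stream and run statistics of Table 3, the sketch below operates on a hypothetical trace format of (handler, offset, length) read records; it is not our actual profiling tool.

```python
def stream_run_stats(trace):
    """Count sequential access streams and runs in a per-handler read trace.

    trace : list of (handler_id, offset, length) read accesses in issue order.
    A stream is a maximal group of spatially contiguous accesses by one handler;
    a run ends whenever that handler's next access is not contiguous with its
    previous one (i.e., the handler performed interleaving I/O).
    """
    streams = {}    # handler -> list of [start, end) intervals, one per stream
    last_end = {}   # handler -> end offset of its previous access
    runs = 0
    for handler, offset, length in trace:
        if last_end.get(handler) != offset:       # interleaving I/O breaks the run
            runs += 1
        last_end[handler] = offset + length
        for interval in streams.setdefault(handler, []):
            if interval[1] == offset:             # contiguous with an existing stream
                interval[1] = offset + length
                break
        else:
            streams[handler].append([offset, offset + length])
    n_streams = sum(len(v) for v in streams.values())
    return n_streams, runs
```

On such a trace, about one run per stream indicates that a handler reads each stream without interleaving, matching the observation above.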
Figure 7 illustrates the throughput of the four server workloads. For each workload, we show measured performance at different concurrency levels under the original Linux kernel and after the successive performance bug fixes. The elevator I/O scheduler is employed for SPECweb99 and media clips while the anticipatory I/O scheduler is used for index searching and TPC-C. Therefore bug fix #3 is only meaningful for SPECweb99 and media clips while fixes #2 and #4 are only relevant for index searching and TPC-C. The I/O throughput results are those observed at the application level; they are acquired by instrumenting the server applications with statistics-collection code. We were not able to instrument the MySQL database used by TPC-C in this way, so we only show the request throughput for this workload.
As suggested by the characterization of bug #1, Figures 7(B) and (C) confirm a substantial performance improvement (around five-fold) from the bug fix at high execution concurrencies. We notice that its effect is not as pronounced for SPECweb99 and TPC-C. This is also explained by our characterization of bug #1, since these workloads do not have sufficiently long sequential access streams. The other bug fixes provide moderate performance enhancements for the workloads that they affect. The average improvement (over all affected workload conditions) is 6%, 13%, and 4% for bug fixes #2, #3, and #4 respectively.
Aggregating the effects of all bug fixes, the average improvement (over all tested concurrencies) of the corrected kernel over the original kernel is 6%, 32%, 39%, and 16% for the four server workloads respectively.
Performance debugging. Earlier studies have proposed techniques such as program instrumentation (e.g., MemSpy [20] and Mtool [13]), complete system simulation (e.g., SimOS [24]), performance assertion checking [22], and detailed overhead categorization [9] to understand performance problems in computer systems and applications. These techniques focus on offering fine-grained examination of the target system/application in specific workload settings. Many of them are too expensive to be used for exploring wide ranges of workload conditions and system configurations. In comparison, our approach trades off detailed execution statistics at specific settings for comprehensive characterization of performance anomalies over wide ranges of workloads.
Recent performance debugging work employs statistical analysis of online system traces [1,7] to identify faulty components in complex systems. Such techniques are limited to reacting to anomalies under past and present operational environments and they cannot be used to debug a system before such operational conditions are known. Further, our approach can provide the additional information of correlated workload conditions with each potential performance bug, which is helpful to the debugging process.
Identifying non-performance bugs in complex systems. Several recent works investigated techniques to discover non-performance bugs in large software systems. Engler et al. detect potential bugs by identifying anomalous code that deviates from the common pattern [11]. Wang et al. discover erroneous system configuration settings by matching against a set of known correct configurations [34]. Li et al. employ data mining techniques to identify copy-paste and related bugs in operating system code [17]. However, performance-oriented debugging can be more challenging because many performance bugs are strongly connected with the code semantics and they often do not follow particular patterns. Further, performance bugs may not cause obvious misbehaviors such as incorrect states or system crashes. Without an understanding of the expected performance (e.g., through the performance model that we built), it may not even be easy to tell whether performance anomalies exist in a complex system.
I/O system performance modeling. Our performance debugging approach requires the construction of a whole-system performance model for targeted I/O-intensive server workloads. A large body of previous studies have constructed various analytical and simulation models to examine the performance of storage and I/O systems, including those for disk drives [5,16,25,28,36], disk arrays [2,8,33], I/O scheduling algorithms [23,26,35], and I/O prefetching [6,29,31]. However, performance models for individual system components do not capture the interplay between different components. This paper presents a whole-system throughput model that considers the combined impact of the application characteristics and several relevant operating system components on the overall server performance.
Using system-level models to predict the performance of I/O-intensive workloads is not new. Ganger and Patt argued that the I/O subsystem model must consider the criticality of I/O requests, which is determined by application and OS behaviors [12]. Shriver et al. studied I/O system performance using a combined disk and OS prefetching model [29]. However, these models do not consider recently proposed I/O system features. In particular, we are not aware of any prior I/O system modeling work that considers the anticipatory I/O scheduling, which can significantly affect the performance of our targeted workloads.
This paper presents a new performance debugging approach for complex software systems using model-driven anomaly characterization. In our approach, we first construct a whole-system performance model according to the design protocol/algorithms of the target system. We then acquire a representative set of anomalous workload settings by comparing measured system performance with model prediction under a number of sample settings. We statistically cluster the anomalous settings into groups likely attributed to individual bugs and characterize them with specific system components and workload conditions. Compared with previous performance debugging techniques, the key advantage of our approach is that we can comprehensively characterize performance anomalies of a complex system under wide ranges of workload conditions and system configurations.
We employ our approach to quickly identify four performance bugs in the I/O system of the recent Linux 2.6.10 kernel. Our anomaly characterization provides hints on the likely system component each performance bug may be located at and workload conditions for the bug to inflict significant performance losses. Experimental results demonstrate substantial performance benefits of our bug fixes on four real server workloads.