Projects
It has become increasingly challenging to develop correct, high-performance, and reliable software. Approximately 10 new bugs are reported every day for popular applications that we rely upon daily (e.g., the Linux kernel, the Apache web server, the Firefox browser). The overall goal of my research is to exploit Moore's law and develop new hardware mechanisms that lead to better tools and programming models. The basic idea is to use hardware to expose information that enables a program to understand its own execution and react to it, which helps developers build correct and reliable software. Specifically, my dissertation research focuses on exposing the information present in the memory subsystem. It also develops mechanisms that allow software to control the information flow in the system with a higher-level specification (e.g., transactions) rather than through low-level operations like loads and stores. A key feature of all these mechanisms is that they support fine-grain, cache-block-granularity (typically tens of bytes) memory regions, which enables software to easily relate them to program-level variables. Below I discuss each mechanism and describe its design. The sections are organized chronologically, earliest work first.
Intra-Process Protection
Fine-grain intra- and inter-thread interactions via memory make it difficult for developers to track, debug, and validate the accesses arising from the various software modules. As one example, the figure presents a high-level representation of the developers' view of Apache at design time --- we use the following notation: Ms indicate modules, Ds indicate data elements, dashed lines indicate interfaces between modules, and the tuple D:(M:P) indicates that module M has permission P on memory location D. For the sake of programming simplicity and performance, current implementations of Apache run all modules in a single process and rely on adherence to the module API to enforce protection. A bug or safety problem in any module could potentially (and does) affect the whole application.
In my ISCA'10 submission, I propose Sentry. From the software's perspective, Sentry is a pluggable access control mechanism for application-level memory watching and protection. It works as a supplement to OS process-based protection
and incurs space and time overhead only when additional
intra-application protections are needed. The software runtime that
manages the intra-application protections can reside entirely at the
user level. We used Sentry to enforce a protection model for an
Apache web server to safeguard the core web server's data from
extension modules by ensuring that modules don't violate the library
interface. We achieved this without requiring any changes to the programming model, with only minimal source-code annotations, and with minimal performance overhead ($\simeq$13\%). We also validated the suitability of Sentry when employed for a watchpoint-based memory debugger.
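To make the intended usage concrete, below is a minimal C sketch of how a user-level runtime could expose Sentry-style protections. All names here (sentry_protect, sentry_set_handler, the permission enum) are hypothetical placeholders rather than the actual interface from the submission, and the function bodies are stubs standing in for the hardware-backed runtime.

\begin{verbatim}
/* Hypothetical user-level interface for Sentry-style fine-grain
 * protection; names and signatures are illustrative only. */
#include <stddef.h>
#include <stdio.h>

typedef enum { PERM_NONE, PERM_READ, PERM_READ_WRITE } sentry_perm_t;
typedef void (*sentry_handler_t)(void *addr, int is_write);

/* Stubs standing in for the hardware-backed runtime calls. */
static sentry_handler_t violation_handler;
static void sentry_set_handler(sentry_handler_t h) { violation_handler = h; }
static int  sentry_protect(void *addr, size_t len, sentry_perm_t perm) {
    (void)addr; (void)len; (void)perm;  /* would program per-block permissions */
    return 0;
}

/* Example: let extension modules read, but never write, the core
 * server's request pool. */
static void on_violation(void *addr, int is_write) {
    fprintf(stderr, "module %s %p illegally\n",
            is_write ? "wrote" : "read", addr);
}

void protect_request_pool(void *pool, size_t len) {
    sentry_set_handler(on_violation);
    sentry_protect(pool, len, PERM_READ);  /* read-only for untrusted modules */
}
\end{verbatim}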
Fine-grain Monitoring
With current hardware support, program analysis tools that require detailed, fine-grain information or control must resort to high-overhead, intrusive routines to collect information about accesses (e.g., which locations are in the cache) and threads. I
proposed hardware support to expose the data movement to software
and reduce the overhead associated with tracking in
software. Interestingly, the bulk of the support required by
monitoring already exists --- the memory system is a network of
caches across which hardware implicitly moves the data to satisfy
the accesses issued by software. It is only required that we notify
software about the hardware events triggered by an access. We
propose two schemes, Alert-On-Update (AOU) and Dependence Summary
Counters (DSCs).
Alert-On-Update: AOU is a lightweight mechanism that permits software to request notification about cache events. When the hardware observes activity on any tracked location, it invokes a handler and provides information about the event. Since software controls the use of and reaction to the event, one can imagine relating the event information to software semantics in various ways. AOU requires only the addition of a single bit per cache line. At TRANSACT'05 and ISCA'07, we demonstrated that AOU is sufficient to significantly speed up software-based transactional memory. Subsequently, we have used it to develop new programmer-friendly reader-writer locks [TRANSACT'09], detect atomicity bugs [TR945], and implement debugger watchpoints.
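As an illustration, the following C sketch shows how software might use AOU for a debugger-style watchpoint. The wrapper names (aou_watch, aou_unwatch, aou_set_handler) are hypothetical; in a real system they would map to instructions that set or clear the per-cache-line alert bit and register the user-level handler.

\begin{verbatim}
/* Hypothetical wrappers over Alert-On-Update; names are illustrative. */
#include <stdio.h>

typedef void (*aou_handler_t)(void *line_addr, int remote_write);

static aou_handler_t alert_handler;
static void aou_set_handler(aou_handler_t h) { alert_handler = h; }
static void aou_watch(void *addr)   { (void)addr; }  /* set alert bit on line */
static void aou_unwatch(void *addr) { (void)addr; }  /* clear alert bit */

/* Example: a watchpoint on a shared flag; the handler fires when another
 * thread's write (coherence invalidation) or an eviction touches the line. */
static int shared_flag;

static void on_alert(void *line, int remote_write) {
    if (remote_write)
        printf("watched line %p was modified remotely\n", line);
}

void install_watchpoint(void) {
    aou_set_handler(on_alert);
    aou_watch(&shared_flag);
}

void remove_watchpoint(void) {
    aou_unwatch(&shared_flag);
}
\end{verbatim}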
Dependence Summary Counters (DSC): Modern processors have many performance counters (e.g., L1 misses), but none provides information about shared data. I proposed a set of counters at every processor that summarize the data communication with other processors. On a coherence request, the cache controller uses the id of the requesting processor and the type of request (e.g., read-only, read/write) to increment the corresponding counter. We demonstrated the use of these counters to detect concurrent accesses from different threads in a TM system [ISCA'08]. The thread scheduler can also use such information to map communicating threads to nearby processors [TR945].
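A minimal C sketch of the scheduler use case follows; it assumes a hypothetical dsc_read(remote, type) call that reads the local processor's counter for a given remote processor and request type, which is not the actual hardware interface.

\begin{verbatim}
/* Hypothetical view of Dependence Summary Counters: one counter per
 * (remote processor, request type) pair at each processor. */
#include <stdint.h>

#define NPROCS 16
enum dsc_type { DSC_READ_ONLY, DSC_READ_WRITE, DSC_NTYPES };

/* Placeholder for the counter-read operation on the local processor. */
static uint64_t dsc_read(int remote, enum dsc_type type) {
    (void)remote; (void)type;
    return 0;  /* would be a counter-read instruction or MSR access */
}

/* Scheduler hint: find the remote processor this thread communicates with
 * most, so the two can be mapped to nearby cores. Returns -1 if no
 * communication has been observed. */
int most_communicating_processor(int self) {
    int best = -1;
    uint64_t best_traffic = 0;
    for (int p = 0; p < NPROCS; p++) {
        if (p == self) continue;
        uint64_t traffic = dsc_read(p, DSC_READ_ONLY)
                         + dsc_read(p, DSC_READ_WRITE);
        if (traffic > best_traffic) { best_traffic = traffic; best = p; }
    }
    return best;
}
\end{verbatim}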
Data Isolation
Isolation refers to the ability to hide modifications from certain parts of a program and then expose or revoke the changes in bulk based on software semantics. The classical uses of isolation have been sandboxing and transactions. In sandboxing, an application uses isolation to ensure that a buggy or insecure software plugin doesn't damage the integrity of the rest of the application. Transactions use isolation to ensure that concurrent speculative tasks don't see any intermediate inconsistent state. Isolation mainly requires support for versioning a location, i.e., buffering new modifications until they are committed while also maintaining current values in case the modifications are discarded.
I developed a new coherence protocol, TMESI, that implements isolation by using the multiple levels of caches in the memory system to hold different versions of a memory block. It buffers new values in the private cache levels close to the processor and moves old values to the shared cache levels. This scheme supports low-overhead commit and revocation of isolated data. A noteworthy feature of this design is that it permits multiple new versions of a location, allowing different software tasks to isolate the same location concurrently. I have developed complete (stable and transient states) snoopy [ISCA'07] and directory [ISCA'08] protocols. Our group used this isolation mechanism to demonstrate that a hardware-based optimistic transactional memory system can be realized within a traditional memory system framework, without any centralized arbitration.
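To illustrate the software-visible effect of this isolation, here is a minimal C sketch under the assumption of a hypothetical isolate_begin/isolate_commit interface; the real mechanism is driven by the TMESI protocol states rather than by these calls.

\begin{verbatim}
/* Hypothetical software view of TMESI-style isolation: writes issued between
 * isolate_begin() and isolate_commit() are buffered in the private caches and
 * are published or discarded in bulk. Names are illustrative stubs. */
#include <stdint.h>

static void isolate_begin(void)  { /* start buffering new versions */ }
static int  isolate_commit(void) { return 1; /* 1 = published, 0 = discarded */ }

/* Example: speculatively update a pair of counters; other threads see both
 * updates or neither. */
struct stats { uint64_t hits, misses; };

int update_stats(struct stats *s, int hit) {
    isolate_begin();
    if (hit) s->hits++; else s->misses++;   /* new values stay isolated */
    return isolate_commit();
}
\end{verbatim}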