We have addressed, and are in the process of addressing, issues such as efficient communication support for both regular and irregular memory access patterns, dynamic balancing of load and locality in autonomous cluster environments, and the applicability of the compiler/runtime interface to other (hardware-based) distributed shared-memory architectures.
An autonomous cluster is one in which each node runs its own copy of the operating system, with processes independently scheduled and managed. Such an environment is an important focus for evaluation and optimization because of its ready availability in most organizations. Its advantages include low cost, the opportunity for partial upgrades, easy management thanks to readily available hardware and software, and natural fault containment due to the use of independent operating systems. However, parallel applications running in such an environment can suffer from load imbalance (and hence poor resource utilization) caused by any of several factors: 1) unequal assignment of load (computation or communication) to equally powerful compute nodes, 2) unequal resources (processor, memory, or network bandwidth or latency, i.e., heterogeneity) at each compute node, and 3) multiprogramming. These load imbalances result in idle waiting time for cooperating processes that need to synchronize or communicate data. Additional waiting time may arise when local scheduling decisions in a multiprogrammed environment introduce dependency-induced delays, or when poor locality causes extra communication.
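To illustrate how these factors combine, the following sketch (all names and numbers are hypothetical, not taken from our system) estimates each node's effective per-iteration time from its assigned work, its relative speed, and its multiprogramming level, and reports an imbalance factor; cooperating processes idle for roughly the gap between the slowest node and the average.

    #include <stdio.h>

    /* Hypothetical per-node description: assigned work, relative speed,
     * and the number of competing processes (multiprogramming level). */
    struct node_info {
        double compute_work;   /* work units assigned to this node */
        double comm_work;      /* communication volume handled by this node */
        double speed;          /* relative processor/network speed (1.0 = base) */
        int    multiprog;      /* competing processes sharing the node */
    };

    /* Effective time per iteration: work scaled by node speed and by the
     * share of the node the process actually receives. */
    static double effective_time(const struct node_info *n)
    {
        double t = (n->compute_work + n->comm_work) / n->speed;
        return t * n->multiprog;
    }

    /* Imbalance factor: slowest node's time divided by the average time. */
    static double imbalance(const struct node_info *nodes, int count)
    {
        double sum = 0.0, max = 0.0;
        for (int i = 0; i < count; i++) {
            double t = effective_time(&nodes[i]);
            sum += t;
            if (t > max) max = t;
        }
        return max / (sum / count);
    }

    int main(void)
    {
        /* Equal work assignment, but node 1 is slower (heterogeneity)
         * and node 2 is multiprogrammed. */
        struct node_info nodes[] = {
            { 100.0, 10.0, 1.0, 1 },
            { 100.0, 10.0, 0.5, 1 },
            { 100.0, 10.0, 1.0, 2 },
        };
        double f = imbalance(nodes, 3);
        printf("imbalance factor: %.2f%s\n", f,
               f > 1.25 ? " (redistribution worthwhile)" : "");
        return 0;
    }

In this example an equal partition still yields a large imbalance factor, purely because of node heterogeneity and multiprogramming, which is exactly the situation the system described next is designed to handle.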
We have developed a comprehensive solution to these problems by combining compile-time analysis, run-time load distribution, and operating system scheduler cooperation for improved utilization of the available resources in an autonomous cluster. The operating system scheduler is modified to implement fair cooperative scheduling (FCS), which guarantees fairness among processes on a node while allowing communicating processes to coordinate their scheduling and allocate their time according to their needs. A locality-aware dynamic load-balancing environment (LADLE) facilitates the communication of resource requirements between the parallel application and the underlying operating system. LADLE uses application-level or compiler-extracted access information to make locality-aware load redistribution decisions, and interacts with FCS to coordinate the scheduling of the parallel application's processes. We have shown that the resulting system provides better performance than leaving the loaded resources unused altogether, and significantly improves utilization while still guaranteeing fairness.
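The following is a minimal sketch of the kind of locality-aware decision LADLE makes; the data structures, cost terms, and function names are illustrative assumptions rather than the actual interface. Given per-node load estimates and per-node counts of the data blocks a work unit touches, it picks the destination that minimizes estimated completion time plus the cost of fetching non-local blocks, leaving the work in place when no move pays off.

    #include <stdio.h>

    #define NNODES 4

    /* Illustrative inputs, not the real LADLE interface:
     * load[i]      -- current load estimate on node i (e.g., effective time)
     * local[i]     -- number of the work unit's data blocks already on node i
     * total_blocks -- blocks the work unit accesses in all
     * move_cost    -- cost of fetching one non-local block */
    static int choose_node(const double load[NNODES], const int local[NNODES],
                           int total_blocks, double move_cost, int current)
    {
        int best = current;
        /* Cost of leaving the work where it is. */
        double best_cost = load[current] +
                           move_cost * (total_blocks - local[current]);

        for (int i = 0; i < NNODES; i++) {
            double cost = load[i] + move_cost * (total_blocks - local[i]);
            if (cost < best_cost) {
                best_cost = cost;
                best = i;
            }
        }
        return best;
    }

    int main(void)
    {
        double load[NNODES]  = { 8.0, 2.0, 3.0, 2.5 };  /* node 0 overloaded */
        int    local[NNODES] = { 10,  0,   6,   1   };  /* node 2 holds most data */
        int dest = choose_node(load, local, 10, 0.5, 0);
        printf("move work from node 0 to node %d\n", dest);
        /* In the full system this decision would also be communicated to the
         * scheduler (FCS) so that the destination node's process is
         * co-scheduled with its communication partners; that interaction is
         * omitted here. */
        return 0;
    }

Note the design point this sketch captures: the lightly loaded node is not automatically the best target, because a node that already holds most of the work unit's data (node 2 here) can win once communication cost is accounted for.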