Notes for CSC 2/456, Monday 31 January 2000
=====================================

Read chapter 7

Deadlock Example

    X, Y: semaphore := 1, 1

    P1:         P2:
      P(X)        P(Y)
      P(Y)        P(X)
      ...         ...
      V(Y)        V(X)
      V(X)        V(Y)

(A runnable C version of this example appears at the end of this
section.)

Definition

A process is deadlocked when it is waiting for some event that will
never occur.  The event never occurs either because
 (a) no other process was set up to trigger the event, or
 (b) the process responsible for the event is also deadlocked.

A set of processes is deadlocked when every process in the set is
waiting for an event that can only be triggered by another process in
the set.

What are resources?

    physical
        processors
        pages
        devices
        communication channels
    virtual (kernel-implemented abstractions, fixed in number)
        processes
        semaphores
        sockets
        ptys
        etc.

There are 4 necessary conditions for deadlock:

 1. mutual exclusion: at least one resource is non-sharable
 2. hold and wait: at least one process is holding some resources
    while waiting for others
 3. no preemption: resources cannot be taken away from the process
    that holds them
 4. circular wait: P0 waits for P1, which waits for P2, ..., which
    waits for P0

------------------------------

Resource Allocation Model

Resource allocation graph:
 (a) resources are boxes
 (b) processes are circles
 (c) arc (Pi,Rj) means process i wants resource j
 (d) arc (Rj,Pi) means process i has resource j
 (e) if we allow more than one instance of a resource, we model each
     instance as a dot within a box and draw an arc from a resource
     instance to a process when the process holds that instance

If no cycle exists in the resource allocation graph, there is no
deadlock.

If there is a cycle in the graph and each resource has only one
instance, then there is deadlock.  In this case, a cycle is a
necessary and sufficient condition for deadlock.

If there is a cycle in the graph and resources may have more than one
instance, there may or may not be deadlock.  (The cycle may be broken
if some process outside the cycle holds a resource instance whose
release can break the cycle.)  Therefore, when multiple resource
instances are considered, a cycle in the resource allocation graph is
a necessary but not a sufficient condition for deadlock.

Example:

This graph has a cycle and is in deadlock:

    R1 -> P1
    P1 -> R2
    R2 -> P2
    P2 -> R1

This graph has a cycle and is not in deadlock:
(Resource 1 has one instance, R11)
(Resource 2 has two instances, R21 and R22)

    R11 -> P1
    P1  -> R21
    R21 -> P2
    R22 -> P3
    P3  -> R11

If P2 finishes, P1 can get R21 and finish, so there is no deadlock.

-------------------------------

Deadlock Prevention

We can prevent deadlock by attacking one or more of the 4 conditions:

 1. mutual exclusion cannot be prevented in general
 2. hold and wait can be prevented by forcing all resources to be
    allocated together; a request is made only when no resources are
    held (caveat: low resource utilization; also, a process may not
    know what resources it will need until it has used some others
    for a while)
 3. no preemption can be prevented by taking resources away from a
    process that waits (caveat: many physical devices can't or
    shouldn't be preempted, such as line printers; also, it can be
    difficult to design a process so that it can recover from
    preemption)
 4. circular wait can be prevented by imposing a total ordering on
    resources (such an ordering may already exist in a hierarchical
    system); a small C sketch of this idea also follows at the end of
    this section

The problem with prevention is that it limits, in advance, the set of
choices each program has for resource allocation.
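In C with POSIX threads, the example at the top of these notes looks
like the sketch below.  This is illustrative, not from the original
notes: the sleep() calls just make the bad interleaving (each thread
holding one lock while waiting for the other) nearly certain.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t X = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t Y = PTHREAD_MUTEX_INITIALIZER;

    void *p1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&X);    /* P(X) */
        sleep(1);                  /* encourage the bad interleaving */
        pthread_mutex_lock(&Y);    /* P(Y): blocks forever if p2 holds Y */
        /* ... critical section ... */
        pthread_mutex_unlock(&Y);  /* V(Y) */
        pthread_mutex_unlock(&X);  /* V(X) */
        return NULL;
    }

    void *p2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&Y);    /* P(Y) */
        sleep(1);
        pthread_mutex_lock(&X);    /* P(X): blocks forever if p1 holds X */
        /* ... critical section ... */
        pthread_mutex_unlock(&X);  /* V(X) */
        pthread_mutex_unlock(&Y);  /* V(Y) */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);    /* hangs once both hold one lock */
        pthread_join(t2, NULL);
        printf("no deadlock this run\n");
        return 0;
    }

All four necessary conditions hold here: the mutexes are non-sharable,
each thread holds one while waiting for the other, nothing preempts a
held mutex, and the two waits form a cycle.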
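As a companion to prevention technique 4 (circular wait), here is a
minimal sketch of imposing a total order on locks.  Ordering by
address is an arbitrary but common convention, and the helper name
lock_pair is invented here, not part of the notes:

    #include <pthread.h>
    #include <stdint.h>

    /* Acquire two mutexes in a globally consistent order (by
     * address), so no two threads can ever wait for each other in a
     * cycle. */
    void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        if ((uintptr_t)a < (uintptr_t)b) {
            pthread_mutex_lock(a);
            pthread_mutex_lock(b);
        } else {
            pthread_mutex_lock(b);
            pthread_mutex_lock(a);
        }
    }

If both threads in the example above used lock_pair(&X, &Y), both
would acquire X first, and the circular wait could never form.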
-------------------------------

Deadlock Avoidance: the Banker's Algorithm

Rather than removing one of the conditions for deadlock a priori, we
allow requests to be served until we spot a potential problem; then we
limit requests.  The banker's algorithm is the most liberal allocation
policy that is guaranteed not to deadlock.

 (1) Before any process can be allocated resources, it must state (or
     "claim") the maximum amount of resources from each resource
     class it will need at any one time.
 (2) No single process can claim more than the total amount of
     physical resources.
 (3) The sum of all claims can be much greater than the amount of
     physical resources.
 (4) Use the stated claims and pending requests to determine whether
     there is the potential for deadlock if a request is granted.

An allocation state for a resource class (the state of all processes
holding and requesting resources of that class) is realizable if:

 (i)   no single claim is for more resources than are available in
       the class
 (ii)  no process holds more than it claimed
 (iii) the sum of the resources held is not more than the available
       resources

A realizable state is "safe" if there exists a sequence of processes,
P0...Pn, called a safe sequence, such that P0 can certainly finish
execution (since we have enough free resources to grant P0 its
maximum claim), and Pi can finish if P0...P(i-1) finish and release
their resources.

Example (assume a single resource class):

    Process A has 4 resources and claims a max of 6
    Process B has 2 resources and claims a max of 7
    Process C has 4 resources and claims a max of 11
    There are 2 unallocated resources

The current state is safe, since (A,B,C) is a safe sequence: A can
get its 2 remaining resources and finish, releasing 6; B can then get
its 5 and finish, releasing 8; C can then get its 7.

The banker's algorithm never grants a request if it would cause the
allocation state to become unsafe.  Does this require that we check
all n! sequences?  No.  If there exists some process A that can
finish with the available resources and the state is safe, then there
exists a safe sequence that begins with A.  (Proof?)

O(n^2) algorithm:

    P := {Pi};
    while P <> {} do
        find A in P such that A can finish with available resources;
        if no such A found then
            report unsafe state and stop;
        else
            (pretend to) remove A from P;
            (pretend to) return A's resources to the resource pool;
        end if;
    end while;
    state is safe;

(A runnable C version of this check appears at the end of this
section.)

Note that the banker's algorithm, if necessary, will force processes
to serialize requests, letting only the process that can finish
continue to allocate resources.  Also note that the banker's
algorithm is CONSERVATIVE (like bankers!): not all unsafe states
represent deadlocks; they simply represent the *potential* for
deadlock.
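Here is a runnable C version of the safety check, a minimal sketch
assuming a single resource class.  The numbers in main are the A/B/C
example above; the function and variable names (is_safe, held, claim,
avail) are mine, not from the notes.

    #include <stdio.h>
    #include <stdbool.h>

    #define NPROC 3

    /* Returns true iff the state (holdings, claims, available
       resources) is safe, by the O(n^2) loop above. */
    bool is_safe(int held[], int claim[], int available) {
        bool finished[NPROC] = { false };
        int avail = available;
        int done = 0;

        while (done < NPROC) {
            bool progress = false;
            for (int i = 0; i < NPROC; i++) {
                /* Pi can finish if its remaining claim fits. */
                if (!finished[i] && claim[i] - held[i] <= avail) {
                    avail += held[i];  /* pretend Pi finishes */
                    finished[i] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress)
                return false;          /* no one can finish: unsafe */
        }
        return true;                   /* a safe sequence exists */
    }

    int main(void) {
        int held[NPROC]  = { 4, 2, 4 };   /* A, B, C */
        int claim[NPROC] = { 6, 7, 11 };
        printf("%s\n", is_safe(held, claim, 2) ? "safe" : "unsafe");
        return 0;
    }

To decide whether to grant a request, the banker tentatively applies
it (increment held[i], decrement the available count) and keeps it
only if is_safe() still returns true.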
-------------------------------

Deadlock Avoidance: Habermann's Version of the Banker's Algorithm

(A very nice, fast, clean solution for systems with only one type of
resource; not detailed in Tanenbaum.)

Suppose we have N instances of resource R.  The resource manager
maintains an array S[0..N-1] of integers; initially, S[j] = N - j for
all j.  The resource manager also knows how many resource instances
have been claimed (i.e., there was a statement of intention to use
them) by each process; these values are maintained in array C[1..P].
The number of resource instances held by each process is maintained
in H[1..P].

    when Pi requests 1 more resource instance do
        for j := 0 to C[Pi]-H[Pi]-1 do
            S[j] := S[j] - 1;
            if S[j] < 0 then
                reject the allocation: state is unsafe;
                restore S;
            end;
        end;
    end;

When making a request, a process decrements one entry of S for each
resource instance it has claimed but not yet acquired (including the
instance being requested).  S[j] therefore represents the number of
processes that could need to acquire at least j more instances of the
resource in order to finish and still leave the overall state safe.

Each time a resource is allocated, S[0] is decremented; S[0]
therefore maintains a count of the available resources.  S[0] can
never be less than 0, since that would mean we had allocated more
resources than we have.

S[N-1] is only decremented when a process making the maximum claim
(N resource instances) requests its first resource instance (i.e., it
holds no resource instances yet).  This can happen only once, since
S[N-1] is initially 1 and is not allowed to become negative.

Note that we have avoided potential deadlock states.  If two
processes each claim N resource instances and each is allowed to hold
1 instance, they are already on the path to deadlock: neither process
will ever get all N resource instances!  Our algorithm prevents this
by not allowing S[N-1] to be decremented twice.

Similarly, if two processes each claim N-1 resource instances, both
can be allowed to allocate 1 resource instance (S[N-1] is not
affected in this case, and S[N-2] can be decremented twice).  Only
one of them would be allowed a second allocation, since S[N-3] starts
at 3 and is decremented each time.  This prevents a possible deadlock
in which each process held 2 of the resources (each would then need
N-3 more resources, but only N-4 would be available).  In addition, a
third process with a claim of N-1 resource instances would not be
given an allocation (S[N-2] would already be 0).

Returning to our example from above:

    Process A has 4 resources and claims a max of 6
    Process B has 2 resources and claims a max of 7
    Process C has 4 resources and claims a max of 11
    There are 2 unallocated resources

There are 12 resource instances.  Initially,

    S = 12 11 10 9 8 7 6 5 4 3 2 1

After all the requests are granted,

    S = 2 1 0 0 0 0 1 1 1 1 1 1

This state is safe because no value is negative.  A can be given one
more resource instance, leading to

    S = 1 0 0 0 0 0 1 1 1 1 1 1

Note that B cannot then be granted one more, since several entries
would become negative.  (A C version of this algorithm, replaying
exactly this example, appears at the end of this section.)

Note that it is often impractical to ask a process to stake a claim
to the maximum amount of resources it will need.  In these cases, the
deadlock avoidance algorithms can't be used.
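A C rendering of Habermann's check, as a sketch that replays the
A/B/C example above.  The arrays S, C, and H follow the notes;
request_one and the driver in main are my own naming.

    #include <stdio.h>
    #include <stdbool.h>

    #define N 12                   /* resource instances, as above */
    #define NPROC 3

    int S[N];                      /* S[j] = N - j initially */
    int C[NPROC] = { 6, 7, 11 };   /* claims of A, B, C */
    int H[NPROC] = { 0, 0, 0 };    /* current holdings */

    /* Try to grant process pi one more instance; undo and refuse if
       any S[j] would go negative.  Assumes pi has not yet reached
       its claim. */
    bool request_one(int pi) {
        for (int j = 0; j < C[pi] - H[pi]; j++) {
            if (--S[j] < 0) {
                for (int k = 0; k <= j; k++)  /* restore S */
                    S[k]++;
                return false;      /* unsafe: reject */
            }
        }
        H[pi]++;                   /* grant the instance */
        return true;
    }

    int main(void) {
        for (int j = 0; j < N; j++)
            S[j] = N - j;
        /* Replay the example: A takes 4, B takes 2, C takes 4. */
        for (int i = 0; i < 4; i++) request_one(0);
        for (int i = 0; i < 2; i++) request_one(1);
        for (int i = 0; i < 4; i++) request_one(2);
        for (int j = 0; j < N; j++)
            printf("%d ", S[j]);   /* prints: 2 1 0 0 0 0 1 1 1 1 1 1 */
        printf("\n");
        printf("A again: %s\n", request_one(0) ? "granted" : "refused");
        printf("B again: %s\n", request_one(1) ? "granted" : "refused");
        return 0;
    }

As in the worked example, the extra request by A is granted and the
following request by B is refused.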
-------------------------------

Deadlock Detection and Recovery

Detection: consider again the resource allocation graph.  We can
"reduce" the graph by process Pi if Pi's requests can be granted; to
reduce, we remove the arrows to and from Pi.  By performing graph
reduction repeatedly, either we end up with no arrows in the graph
(no deadlock), or the remaining processes with arrows are
deadlocked.  (A C sketch of graph reduction appears at the very end
of these notes.)

Recovery:
 (i)   Terminate (with prejudice) some process in the cycle, or some
       other (non-deadlocked) process that holds resources that would
       allow something in the cycle to continue.  Preferably, choose
       a process whose actions are idempotent (repeatable without
       problems) -- processes that update non-volatile state, or
       interact with real-world devices, are not good candidates.
 (ii)  Temporarily preempt resources (e.g., via manual intervention).
 (iii) Roll back a checkpointed process.

-------------------------------

Two-Phase Locking

Basically a rollback mechanism, widely used in database applications.
It only works when you can back processes up; databases do it by
buffering all updates and making them happen atomically once all is
clear.

Phase 1: proceed as you normally would, acquiring locks when you need
them but not releasing any, and making shadow copies of anything you
modify, so that you see your own changes but nobody else does.  If
you can't get a lock you need, throw away all the shadow copies,
release all the locks, and start over.

Phase 2: once you've done all the work needed to move the system as a
whole from one consistent state to another (and you still hold all
the locks you acquired in the process), write all your changes into
the permanent, externally visible state, and release all the locks.

(Not to be confused with "two-phase commit", which is also a database
concept, but one designed to achieve atomicity in the presence of
failures.)
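The two phases can be sketched in C with pthreads trylocks.  This is
schematic, not from the notes: the held-lock table and the helper
names (acquire, abort_txn, commit_txn) are invented, and the
shadow-copy bookkeeping is left as comments.

    #include <pthread.h>
    #include <stdbool.h>

    #define MAXHELD 16

    static pthread_mutex_t *held[MAXHELD];
    static int nheld = 0;

    /* Phase 1 helper: take a lock without blocking, or fail so the
       caller can abort.  Never waits while holding locks. */
    static bool acquire(pthread_mutex_t *m) {
        if (pthread_mutex_trylock(m) != 0)
            return false;          /* contention: caller must abort */
        held[nheld++] = m;
        return true;
    }

    /* Abort: throw away shadow copies, release every lock held, and
       let the caller start phase 1 over from scratch. */
    static void abort_txn(void) {
        while (nheld > 0)
            pthread_mutex_unlock(held[--nheld]);
        /* discard shadow copies here */
    }

    /* Phase 2: write shadow copies into the permanent, externally
       visible state, then release all the locks. */
    static void commit_txn(void) {
        /* install shadow copies here */
        while (nheld > 0)
            pthread_mutex_unlock(held[--nheld]);
    }

    static pthread_mutex_t X = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t Y = PTHREAD_MUTEX_INITIALIZER;

    int main(void) {
        if (acquire(&X) && acquire(&Y))
            commit_txn();
        else
            abort_txn();           /* would retry phase 1 */
        return 0;
    }

Because a transaction never blocks while holding locks (it aborts
instead), the hold-and-wait condition can never lead to deadlock.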
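Finally, referring back to the Deadlock Detection section above: a C
sketch of graph reduction for a single resource class with multiple
instances.  It is essentially the banker's safety loop driven by
actual outstanding requests rather than maximum claims; the snapshot
in main (two processes in an X/Y-style cycle, plus one unaffected
process) is invented for illustration.

    #include <stdio.h>
    #include <stdbool.h>

    #define NPROC 3

    /* Reduce the graph: repeatedly remove any process whose
       outstanding request can be satisfied, pretending it finishes
       and releases its holdings.  Whatever cannot be removed is
       deadlocked. */
    void find_deadlocked(int held[], int request[], int available) {
        bool removed[NPROC] = { false };
        int avail = available;
        bool progress = true;

        while (progress) {
            progress = false;
            for (int i = 0; i < NPROC; i++) {
                if (!removed[i] && request[i] <= avail) {
                    avail += held[i];  /* drop Pi's arcs */
                    removed[i] = true;
                    progress = true;
                }
            }
        }
        for (int i = 0; i < NPROC; i++)
            if (!removed[i])
                printf("P%d is deadlocked\n", i);
    }

    int main(void) {
        /* P0 and P1 each hold one instance and request one more;
           P2 holds nothing and requests nothing. */
        int held[NPROC]    = { 1, 1, 0 };
        int request[NPROC] = { 1, 1, 0 };
        find_deadlocked(held, request, 0);  /* reports P0 and P1 */
        return 0;
    }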