Deadlock Avoidance
Meh. I've got an idea how to avoid deadlock. Just never do anything concurrently! Boom—problem solved!
Actually, you're right. If we only ever run one process at a time, we'll never have deadlock.
But that's a terrible solution! Talk about a sledgehammer to crack a nut!
But it does give us an idea…
So, on the previous page, we learned that deadlock prevention means the programmer writes their code so that deadlock situations can't arise. Now we're going to learn about deadlock avoidance, where the system avoids deadlock by being more careful about when to allocate resources, sometimes making processes wait longer.
This prevention vs. avoidance thing is going to spin my brain. I know I'm going to get them muddled up.
Let's talk some more about avoidance, and maybe it will be clearer later on.
Meh. You're just “avoiding” the issue. Classic delay tactic.
Right, again, Goat. That's avoidance!
Whoa.
As we've said, the most conservative resource-allocation policy would be to let only one process run at a time. But there is a better way: avoid ever reaching an unsafe state, one where we haven't actually deadlocked yet, but where a wrong turn would get us there.
To avoid deadlock, we need processes to tell us something new. We need them to indicate which resources they might need in the future. In the literature, these are often (confusingly) called “claimed” resources, but we'll describe it as resources they're “interested in”. Processes can change the set of resources they're interested in, but once they start requesting resources, they can't change their minds until they've released all the resources they've already been allocated.
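The "interested in" rule above can be captured in a few lines. This is a minimal sketch with names of our own choosing (the simulator's internals aren't shown in the text): a process declares its interest set up front, and can only change it while holding nothing.

```python
# Minimal sketch of the claim rule: a process declares which resources it is
# "interested in", and may only change that set while it holds no resources.
# All names here (set_interest, interested, allocated) are our own, not the
# simulator's.

interested = {}   # process -> set of resources it may need later
allocated = {}    # process -> set of resources it currently holds

def set_interest(proc, resources):
    if allocated.get(proc):
        raise RuntimeError(f"{proc} must release everything it holds first")
    interested[proc] = set(resources)
```

For example, `set_interest("Alice", {"Pan", "Whisk"})` succeeds while Alice holds nothing, but once `allocated["Alice"]` is non-empty, any further `set_interest("Alice", ...)` raises until she releases everything.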
The simulation below sets up three processes and three resources, where each process is interested in two of the resources. At the start, no one has requested or been allocated any resources yet. You can play around with the simulation, but reset it to the initial state by clicking the reset button before you continue on to the exercises below.
Simulator and Visualization
(The interactive simulator and its deadlock-status readout appear here.)
Exercise: A Dangerous Allocation
In the simulator above, make the following allocations.
- Give Pan to Alice.
- The deadlock status should say “No deadlock detected”.
- Give Whisk to Bob.
- The deadlock status should still say “No deadlock detected”.
- Give Bowl to Charlie.
- The deadlock status will now say “Deadlock possible but not currently present.”
Avoiding Deadlock
I think I see the rule. If giving a resource to a process would create a cycle in the graph, we can't do it yet. They have to wait until they can get all the resources they need without creating a cycle.
But how do we know they'll ever get all the resources they need?
In the worst case, they'll have to wait, but as other processes release resources, they'll eventually get what they need.
So no starvation?
Not if we have a carefully written implementation of the algorithm, one that prefers processes that have been waiting a long time when it retries requests.
This algorithm is known as the resource-allocation graph algorithm. It operates as follows:
- When a process requests a resource, it must wait if
- The resource is held by another process, or
- Allowing the request would create a cycle in the resource-allocation graph. (The system tentatively adds the request edge and checks for a cycle; if one appears, it removes the edge and makes the process wait.)
- Whenever another process releases a resource that a waiting process is interested in, the system retries that process's request to see whether it can now be granted safely.
This algorithm avoids deadlock by ensuring that the system never gets into a state where deadlock is unavoidable.
(Note that our simulator doesn't quite implement the resource-allocation graph algorithm because it shows you when the system is in an unsafe state, rather than stopping it from getting into an unsafe state.)
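The algorithm above can be sketched in code. The sketch below is ours, not the simulator's implementation, and assumes single-instance resources: the graph's edges are allocation edges (resource to holder) and "interested in" (claim) edges (process to each resource it's interested in but doesn't hold), and a request is granted only if the resource is free and granting it wouldn't create a cycle.

```python
# A sketch of the resource-allocation graph algorithm for single-instance
# resources. Class and method names are our own invention.

class RAGSimulator:
    def __init__(self):
        self.interested = {}   # process -> set of resources it may need (claim edges)
        self.holder = {}       # resource -> process holding it (allocation edges)

    def declare_interest(self, proc, resources):
        # A process must declare everything it might need up front, and may
        # only change the set while it holds no resources.
        if any(p == proc for p in self.holder.values()):
            raise RuntimeError(f"{proc} must release its resources first")
        self.interested[proc] = set(resources)

    def request(self, proc, res):
        """Return True if the resource is granted, False if proc must wait."""
        if res not in self.interested.get(proc, set()):
            raise RuntimeError(f"{proc} never declared interest in {res}")
        if res in self.holder:
            return False               # held by another process: wait
        if self._would_cycle(proc, res):
            return False               # granting would risk deadlock: wait
        self.holder[res] = proc
        return True

    def release(self, proc, res):
        if self.holder.get(res) == proc:
            del self.holder[res]

    def _would_cycle(self, proc, res):
        # Tentatively add the allocation edge res -> proc, then run a DFS over
        # claim edges (process -> each interesting resource it doesn't hold)
        # plus allocation edges (resource -> holder), looking for a cycle.
        holder = dict(self.holder)
        holder[res] = proc

        def neighbours(node):
            if node in self.interested:   # process node: claim edges out
                return [r for r in self.interested[node] if holder.get(r) != node]
            return [holder[node]] if node in holder else []   # resource node

        visited, stack = set(), set()

        def dfs(node):
            visited.add(node)
            stack.add(node)
            for nxt in neighbours(node):
                if nxt in stack or (nxt not in visited and dfs(nxt)):
                    return True
            stack.discard(node)
            return False

        return any(dfs(p) for p in list(self.interested) if p not in visited)
```

Replaying the exercise above with this sketch: after `declare_interest` calls giving Alice {Pan, Whisk}, Bob {Whisk, Bowl}, and Charlie {Bowl, Pan}, the requests for Pan by Alice and Whisk by Bob are granted, but Charlie's request for Bowl is refused, because the tentative allocation would close the cycle Alice → Whisk → Bob → Bowl → Charlie → Pan → Alice. Unlike the simulator on this page, this sketch refuses to enter the unsafe state rather than merely reporting it.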
Optional Exercise: More Things
I always want MORE!
You can add more resources and processes to the simulation, and experiment more with the deadlock-avoidance algorithm if you like.
One thing to be aware of if you experiment with the simulator is that you must indicate all the resources a process will need before it starts requesting resources. If you forget, you'll need to release all the resources the process has been allocated before you can change the set of resources it's interested in.
More Advanced Deadlock Avoidance
Dijkstra's Banker's Algorithm is a more advanced deadlock-avoidance algorithm that can handle multiple instances of each resource type, something the resource-allocation graph algorithm can't do. We won't go into the details of the Banker's Algorithm here because it's rarely used in practice. If you want to learn more about it, you can check out the Wikipedia page.
In Practice
Meh. So it's rarely used. Why did we even bother learning about it?
Sometimes it's good to be aware of the broader picture. This algorithm is a counterpoint to the deadlock-detection algorithm we saw on the previous page, showing that with a simple change we can not only detect deadlock but also avoid it.