CS 134

Understanding Spinlocks in OS/161

In kernel code, you'll often see code like

void
pan_fry(struct sausage *s)
{
        spinlock_acquire(&s->s_lock);
        ++(s->s_doneness);
        spinlock_release(&s->s_lock);
}

In this snippet, spinlock_acquire and spinlock_release are used to protect a critical section of code where we update how well cooked the sausage is. This is an example of using a spinlock to protect shared data from concurrent access by multiple threads.

  • Pig speaking

    Does OS/161 actually represent sausages in the kernel? I can't find sausage.c anywhere in the source code!

  • PinkRobot speaking

    No. That was just a non-specific placeholder example.

  • Pig speaking

    Seems pretty explicit to me.

  • Dog speaking

    Is it lunch time yet?

Concept and Purpose

A spinlock is a lock that causes the CPU core trying to acquire it to simply wait in a loop (“spin”) while repeatedly checking if the lock is available. Spinlocks are used to protect shared resources in a multiprocessor environment, particularly for short, non-blocking operations.

Key characteristics:

  • Very low overhead when uncontended
  • Locking and unlocking does not involve context switches
  • Can be used in interrupt handlers

Implementation in OS/161

In OS/161, spinlocks are implemented in kern/thread/spinlock.c and kern/include/spinlock.h.

Core Operations

void spinlock_init(struct spinlock *splk);
void spinlock_cleanup(struct spinlock *splk);
void spinlock_acquire(struct spinlock *splk);
void spinlock_release(struct spinlock *splk);
bool spinlock_do_i_hold(struct spinlock *splk);

All the spinlock operations take a pointer to a struct spinlock as an argument (very often via the address-of operator, &, as shown in the code snippet above).

Before using a spinlock, you must initialize it with spinlock_init; when you are finished with it, release its resources with spinlock_cleanup. The spinlock_acquire and spinlock_release functions acquire and release the lock, respectively, and spinlock_do_i_hold checks whether the current CPU holds the lock.
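Putting these operations together, a typical pattern looks like this (a sketch in OS/161 kernel style; struct widget and its fields are hypothetical, not part of the OS/161 source):

```c
#include <spinlock.h>

struct widget {
        struct spinlock w_lock;   /* protects w_count */
        int w_count;
};

void
widget_setup(struct widget *w)
{
        spinlock_init(&w->w_lock);
        w->w_count = 0;
}

void
widget_bump(struct widget *w)
{
        spinlock_acquire(&w->w_lock);
        KASSERT(spinlock_do_i_hold(&w->w_lock));  /* sanity check */
        w->w_count++;                             /* critical section */
        spinlock_release(&w->w_lock);
}
```

Note that the spinlock lives inside the structure it protects, which is the usual OS/161 convention (compare the sausage example above).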

Internal Data Structure

You mostly don't need to care about the internal structure of a spinlock, but for reference, here's the definition from kern/include/spinlock.h:

struct spinlock {
        volatile spinlock_data_t splk_lock; /* Memory word where we spin. */
        struct cpu *splk_holder;            /* CPU holding this lock. */
        HANGMAN_LOCKABLE(splk_hangman);     /* Deadlock detector hook. */
};

(HANGMAN_LOCKABLE is a hook for OS/161's "Hangman" deadlock detector, which is documented separately.)

Interaction with Interrupts

A crucial aspect of spinlocks in OS/161 is their interaction with interrupts. When a spinlock is acquired, interrupts are disabled on the current CPU.

Disabling interrupts

  1. Prevents Deadlocks: If an interrupt handler were allowed to run and tried to acquire a spinlock already held by the code it interrupted, it would spin forever on that CPU. Disabling interrupts rules this scenario out.
  2. Ensures Atomicity: With interrupts off, the critical section cannot be preempted on the current CPU, so it executes atomically with respect to that CPU.
  3. Avoids Priority Inversion: An interrupt handler cannot delay code that is holding a spinlock, which would otherwise force other CPUs to spin longer waiting for the lock.

In the OS/161 implementation, you can see interrupt disabling in spinlock_acquire in kern/thread/spinlock.c,

void
spinlock_acquire(struct spinlock *splk)
{
    splraise(IPL_NONE, IPL_HIGH);
    // ... (atomically acquire the lock) ...
}

and reenabling interrupts in spinlock_release,

void
spinlock_release(struct spinlock *splk)
{
    // ... (release the lock) ...
    spllower(IPL_HIGH, IPL_NONE);
}
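Schematically, the elided pieces combine into something like the following. This is a simplified sketch, not the verbatim OS/161 source; the real spinlock_acquire also handles the uniprocessor/boot case and the Hangman hooks:

```c
void
spinlock_acquire(struct spinlock *splk)
{
        splraise(IPL_NONE, IPL_HIGH);   /* disable interrupts on this CPU */

        /* Spin until our test-and-set observes the lock word clear. */
        while (spinlock_data_testandset(&splk->splk_lock) != 0) {
                /* spin */
        }

        splk->splk_holder = curcpu->c_self;   /* record the owning CPU */
}
```

Recording the holder is what makes spinlock_do_i_hold and the double-acquire panic (discussed below) possible.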

Spinlocks vs. Mutexes (Locks)

Understanding when to use spinlocks vs. mutexes (implemented as sleep locks in OS/161) is crucial:

Aspect                     Spinlocks     Mutexes (Locks)
-------------------------  ------------  ----------------
Duration                   Short         Potentially long
Context switches           No            Yes
Interrupt safety           Safe          Unsafe
Blocking                   No (spins)    Yes (sleeps)
Use in interrupt handlers  Yes           No

Use spinlocks when

  • The critical section is very short
  • You need to synchronize interrupt handlers
  • You're implementing other synchronization primitives

Use mutexes when

  • The critical section may be long
  • Blocking/sleeping is acceptable
  • In most general-purpose synchronization scenarios

Performance and Design Considerations

  1. Keep Critical Sections Short: Long-held spinlocks can waste CPU cycles and increase latency.
  2. Avoid Nested Spinlocks: Complex nesting is harder to think about and can lead to deadlocks if not designed carefully. Whenever possible, keep things simple.
  3. Don't Sleep While Holding: Never sleep or yield while holding a spinlock.
    • Specifically, don't call P, lock_acquire, cv_wait, or thread_yield while holding a spinlock.
    • In contrast, the low-level wait-channel functions (such as wchan_sleep) expect the caller to hold the spinlock, and take a pointer to it so they can release it while the thread sleeps and reacquire it on wakeup; so even these functions never actually sleep while holding the spinlock.
  4. Minimize Interrupt-Disabled Time: Since spinlocks disable interrupts, use them judiciously.
  5. Consider Multiprocessor Implications: Spinlocks are crucial for multiprocessor synchronization but can impact scalability if overused.
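The wait-channel idiom mentioned in point 3 looks roughly like this (a sketch; wc, sl, and the condition are placeholders, and in OS/161 wchan_sleep takes the spinlock so it can drop it while sleeping):

```c
/* Sketch: sleeping safely on a wait channel guarded by a spinlock.
 * wchan_sleep releases sl while the thread sleeps and reacquires it
 * before returning, so we never actually sleep holding the lock. */
spinlock_acquire(&sl);
while (!condition) {
        wchan_sleep(wc, &sl);
}
/* condition now holds, and we hold sl again */
spinlock_release(&sl);
```

The while loop (rather than an if) matters: after waking, the condition must be rechecked under the lock, since another CPU may have changed it first.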

Common Pitfalls and Best Practices

  1. Double Acquire: Trying to acquire a spinlock that the current CPU already holds would spin forever; the OS/161 spinlock implementation detects this case and panics instead.
  2. Forgetting to Release: Always ensure spinlocks are released, even in error paths.
  3. Using in User Space: Spinlocks are for kernel-space only.
  4. Holding Across Context Switches: Never hold a spinlock across a context switch or when sleeping.
  5. Excessive Spinning: In high-contention scenarios, consider using a different synchronization mechanism.
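Pitfall 2 in practice: every return path must release the lock, including error paths. A hedged sketch, assuming a hypothetical struct widget containing a spinlock w_lock and a counter w_count:

```c
int
widget_take(struct widget *w)
{
        spinlock_acquire(&w->w_lock);
        if (w->w_count == 0) {
                spinlock_release(&w->w_lock);   /* release on the error path too */
                return ENOENT;
        }
        w->w_count--;
        spinlock_release(&w->w_lock);
        return 0;
}
```

Since C has no destructors or defer, keeping critical sections short and structuring functions with a single cleanup path makes this kind of bug much harder to write.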

Summary

Spinlocks are a fundamental synchronization primitive in OS/161, used to protect short, non-blocking critical sections. OS/161's spinlocks prevent threads on other CPUs from entering the critical section and interrupts on this CPU from interrupting the critical section. Understanding when to use spinlocks vs. mutexes is crucial for writing correct and performant kernel code.
