Context switching

Required reading: proc.c (focus on scheduler() and sched()), swtch.S. Also process creation: sys_fork() and copyproc().

Overview

Big picture: more programs than CPUs. How to share the limited number of CPUs among the programs?

Idea: have the kernel transparently switch the CPU(s) among programs, giving them a "virtual CPU" abstraction.

Terminology: I'm going to call a switch from one program to another a context switch.

The program's view of context switching

For this discussion we're going to assume that the O/S runs each program in a "process", which couples an address space with a single thread of control. For the most part each process appears to have a dedicated CPU, but if processes compare notes they can see that the CPU is actually multiplexed: a process may be suspended between any two instructions and resumed later, so its execution proceeds with gaps and at an unpredictable rate.

The kernel also exposes system calls for handling processes; the main xv6 process system calls are fork(), exit(), wait(), kill(pid), getpid(), and sleep(n).
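
As a quick illustration of how these calls fit together, here is a hedged sketch of a user program that forks a child and waits for it (it assumes the usual xv6 user-level headers and the xv6 user library's printf(fd, ...)):
#include "types.h"
#include "user.h"

int
main(void)
{
  int pid = fork();        // returns 0 in the child, the child's pid in the parent
  if(pid == 0){
    printf(1, "child %d\n", getpid());
    exit();                // xv6's exit() takes no status argument
  }
  wait();                  // block until the child exits, then reap it
  printf(1, "parent %d\n", getpid());
  exit();
}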

The kernel's view of context switching

As you know, the xv6 kernel maintains a kernel stack for each process, on which to run system calls for the process. At any given time, multiple processes may be executing system calls inside the kernel. This is a common kernel arrangement -- multiple threads of control in the kernel address space, one per user-level process. A kernel thread must be capable of suspending and giving the CPU away, because the implementation of many system calls involves blocking to wait for I/O. That's the reason why each xv6 process has its own kernel stack.

Terminology: if a process has entered the kernel, the kernel stack and CPU registers of the resulting thread of control are called the process's kernel thread.
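
To make this concrete, here is an approximate sketch of the per-process bookkeeping in struct proc (paraphrased from memory, not copied from proc.h; field names and types differ a bit between xv6 revisions). The kernel stack plus the saved register context are what constitute the kernel thread:
// Approximate shape of struct proc (see proc.h for the real definition).
struct proc {
  char *kstack;              // this process's kernel stack
  enum proc_state state;     // UNUSED, EMBRYO, SLEEPING, RUNNABLE, RUNNING, ZOMBIE
  int pid;                   // process id
  struct proc *parent;       // parent process
  void *chan;                // if SLEEPING, the channel being slept on
  int killed;                // nonzero once kill() has been called on this process
  struct context context;    // saved registers; swtch() switches to/from here
  struct trapframe *tf;      // user registers saved on kernel entry
  // ... plus the user memory, open files, current directory, etc.
};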

In this arrangement, the kernel doesn't directly switch CPUs among processes. Instead, the kernel has a number of mechanisms which combine to switch among processes: a per-CPU scheduler() thread, swtch() to save and restore register contexts, yield()/sched() for a process to give up the CPU, and the timer interrupt to force a switch when a process won't give it up voluntarily.

Processes can be in a number of states (see proc.h, 1526): UNUSED, EMBRYO, SLEEPING, RUNNABLE, RUNNING, ZOMBIE. Diagram of possible states (skip ZOMBIE for now) and transitions between them.
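
Since the diagram itself isn't in this handout, here is a rough summary of the states and transitions as comments (a summary, not a copy of proc.h):
enum proc_state {
  UNUSED,    // proc-table slot is free
  EMBRYO,    // allocproc() has claimed the slot but setup isn't finished
  SLEEPING,  // blocked in the kernel waiting for some event
  RUNNABLE,  // ready to run, waiting for scheduler() to pick it
  RUNNING,   // currently executing on some CPU
  ZOMBIE     // has exited, waiting for the parent's wait() to reap it
};
// Typical transitions: UNUSED -> EMBRYO -> RUNNABLE -> RUNNING;
// RUNNING -> RUNNABLE (yield), RUNNING -> SLEEPING (sleep) -> RUNNABLE (wakeup),
// RUNNING -> ZOMBIE (exit) -> UNUSED (wait).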

allocproc [1627], copyproc [1709], userinit [1757].

sys_fork [2807]. Why save the pid in a temporary variable?
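
The question is pointing at a pattern roughly like the following (a sketch, not the literal sys_fork code; sketch_fork is a made-up name): once the child is marked RUNNABLE it may start running immediately, possibly on another CPU, so the parent copies the pid out first rather than reading the child's proc entry afterwards:
// Sketch of the fork pattern (see sys_fork and copyproc for the real code).
int
sketch_fork(void)
{
  int pid;
  struct proc *np;

  if((np = copyproc(cp)) == 0)   // cp is the current process
    return -1;
  pid = np->pid;                 // copy the pid out before the child can run
  np->state = RUNNABLE;          // from here on the child may be running
  return pid;                    // so don't depend on np's fields anymore
}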

xv6 code examples

Let's watch what happens when a timer interrupt causes an involuntary context switch between two compute-bound user processes. The user process:
#include "types.h"
#include "user.h"

int
main(void)
{
  fork();      // now there are two copies of this program...
  while(1)
    ;          // ...each spinning forever, never yielding voluntarily
}

Run bochs with a single CPU (bochs -q -f uni, then load-symbols "kernel.sym"). Break in yield() [1869] (b "yield", c), run spin in xv6, then print-stack 1. We are now on the kernel stack of one of the two processes. The return EIP should be trap() [2592] (but it might be alltraps; why?).
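
For reference while answering the next few questions, yield() in this version looks roughly like this (a from-memory sketch, not a verbatim copy; see proc.c around 1869):
// Rough sketch of yield().
void
yield(void)
{
  acquire(&proc_table_lock);   // protect cp->state from other CPUs and the scheduler
  cp->state = RUNNABLE;        // was RUNNING; we still want the CPU, just not right now
  sched();                     // switch away; returns when the scheduler picks us again
  release(&proc_table_lock);
}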

Why does yield() need to acquire proc_table_lock? When will the lock be released?

Why state = RUNNABLE? What was state's previous value?

yield() calls sched() which calls swtch() [2156]. b "swtch", s, c, print-stack 3. Look at definition of struct context [1515]: it's a set of saved registers. swtch() will save into one context, restore from another. What are the two contexts? x/8x the context addrs.
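
For orientation, struct context is approximately the following (the authoritative definition is at 1515; the exact register list here is a hedged reconstruction based on the x/8x hint of eight saved words). The two contexts in play are the yielding process's context and the scheduler's per-CPU context:
// Approximate layout of struct context: the saved eip and esp plus the
// general-purpose registers other than eax (eight 32-bit words in all).
struct context {
  int eip;   // where to resume execution
  int esp;   // which stack to resume on
  int ebx;
  int ecx;
  int edx;
  int esi;
  int edi;
  int ebp;
};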

Where do we think swtch() will return to?

Why doesn't swtch() save EAX in struct context?

Why doesn't swtch() just leave the return EIP on the stack? Why does it copy the return EIP from the stack to the struct context, then back from the struct context to the stack?

At end of swtch(), what stack are we running on?

Where does print-stack show swtch() will return to? Why did this happen?

scheduler() [1808] is in the middle of a loop that checks to see whether each process would like to run (is RUNNABLE).
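
The loop structure is roughly the following (a sketch from memory, not the literal code at 1808; field names and the exact point where interrupts are re-enabled may differ), which should help with the next few questions about the lock:
// Rough sketch of scheduler(); each CPU runs this on its own private stack.
void
scheduler(void)
{
  struct cpu *c = &cpus[cpu()];
  struct proc *p;
  int i;

  for(;;){
    acquire(&proc_table_lock);     // acquire() also disables interrupts
    for(i = 0; i < NPROC; i++){
      p = &proc[i];
      if(p->state != RUNNABLE)
        continue;
      // Run this process: point the hardware at its segments and kernel
      // stack, mark it RUNNING so no other CPU picks it too, and switch
      // to its saved register context.
      setupsegs(p);
      c->curproc = p;
      p->state = RUNNING;
      swtch(&c->context, &p->context);
      // The process switched back to us via sched(); it already changed
      // its own state (RUNNABLE, SLEEPING, or ZOMBIE) before switching.
      c->curproc = 0;
    }
    release(&proc_table_lock);     // briefly drops the lock (and lets interrupts
                                   // in) before the outer loop re-acquires it
  }
}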

If scheduler() finds no RUNNABLE process, it finishes the loop, releases the lock, but then immediately re-acquires the lock at the start of the outer loop. Why release just to re-acquire?

That is, suppose for a while there are no RUNNABLE processes. What will the CPUs spend their time doing?

Why do we need to disable interrupts while holding locks in the kernel? For example, why do we disable interrupts when acquiring a lock in scheduler() [acquire is at 1375], or when taking tickslock in sys_sleep() and trap() [sys_sleep 2864, trap 2549]?
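
The scenario the question is driving at, sketched below (cli() is real, from x86.h; the function name, the atomic-exchange helper, and the field names are illustrative, not the literal acquire() at 1375):
// Why acquire() turns interrupts off, using tickslock as the example.
void
sketch_acquire(struct spinlock *lk)
{
  // Suppose sys_sleep() held tickslock with interrupts still enabled and the
  // timer interrupt fired on the same CPU: trap() would then try to acquire
  // tickslock and spin forever, since the holder can't run again until trap()
  // returns. Disabling interrupts first rules this out.
  cli();
  while(atomic_swap(&lk->locked, 1) != 0)
    ;   // spin until the previous holder releases the lock
}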

On the other hand, what would happen if we never enabled interrupts in scheduler() and kept spinning with no processes to run?

Why does curproc() [1789], which "cp" is a macro for, also disable interrupts, even though it doesn't take any lock? It doesn't seem to modify anything.

How does the scheduler avoid picking the same process that just called yield (if there's another process that can run)?

scheduler() has its own stack (in cpus[x]). Why? Why not just call it directly from sched()? I.e. why not have scheduler() run on the stack of the last process that gave up the CPU?
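
A rough picture of the per-CPU state involved (field names are approximate; see the declaration of cpus[]): each CPU has its own stack and its own saved context for scheduler(), separate from every process's kernel stack:
// Approximate per-CPU state (not the exact definition).
struct cpu {
  struct proc *curproc;      // the process this CPU is running, or 0
  struct context context;    // scheduler()'s saved registers
  struct taskstate ts;       // where the hardware finds the kernel stack on a trap
  char mpstack[512];         // the stack that scheduler() runs on
  // ...
};
extern struct cpu cpus[NCPU];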

What policy does scheduler() use to give CPU time to processes?

We know scheduler() is going to run the other CPU-bound process, by calling setupsegs(p) and swtch() to restore the other process's registers and stack. Why does it set state to RUNNING? (That is, what's the functional difference between RUNNABLE and RUNNING?)

Do we need to call setupsegs in scheduler? Can we call it just before returning to user-space instead? What does setupsegs do exactly (set stack-top, user segments) [1672]?

Break in swtch() again, and check that we are indeed switching to the other process's stack; then step through until the other process's call to yield() (made from trap()) returns.

Break in swtch() a few times to see that we're bouncing between two processes (based on their context address values).

How do we garbage-collect processes in xv6? We can't deallocate the stack we're running on right now. When does a process get deallocated? Back to the process state diagram and the ZOMBIE state. [exit 2004, wait 2053] Why does exit pass children to init, rather than to its own parent?
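
In outline, the split between exit() and wait() works like this (a sketch of the division of labor with made-up names, not the literal code; locking and wakeups omitted): exit() can't free its own kernel stack, so the parent's wait() does the freeing:
// Sketch of the exit/wait split (see exit [2004] and wait [2053]).
void
sketch_exit(void)
{
  // close open files, release cwd, hand any children to init, wake the parent...
  cp->state = ZOMBIE;   // keep the proc slot and kernel stack: we are still
  sched();              // running on that stack, so we can't free it ourselves
  // never reached
}

int
sketch_wait(void)
{
  // Scan the proc table for a ZOMBIE child. Once found, it is guaranteed not
  // to be running anymore, so free its memory and kernel stack, mark the slot
  // UNUSED, and return its pid (sleep and rescan if there are children but
  // none has exited yet).
  return -1;   // placeholder; see wait() for the real logic
}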

How do we kill a process? Why not turn it into a ZOMBIE directly? [kill 1976]
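
In outline (a sketch with a made-up name; see kill at 1976 and the killed check in trap()): kill() only sets a flag, and the victim exits itself at a safe point, since it might be running on another CPU or be in the middle of a system call:
// Sketch of kill(): mark the victim and let it exit itself.
int
sketch_kill(int pid)
{
  struct proc *p;

  acquire(&proc_table_lock);
  for(p = proc; p < &proc[NPROC]; p++){
    if(p->pid == pid){
      p->killed = 1;              // the victim notices this in trap() and
      if(p->state == SLEEPING)    // calls exit() itself, at a point where
        p->state = RUNNABLE;      // its kernel state is consistent
      release(&proc_table_lock);
      return 0;
    }
  }
  release(&proc_table_lock);
  return -1;
}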

Wrap-up

More sophisticated operating systems usually allow multiple threads of control in a process. How could we do this for xv6? Each thread would have its own user-space stack in the process address space. There would be a few more system calls: perhaps thread_fork(), thread_exit(), and thread_wait(). The kernel would have to split struct proc into a structure describing overall process resources (address space and open files), and a kernel stack and set of saved registers per thread. That way multiple threads from a process could be executing system calls inside the kernel.
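
A very rough sketch of the split described above (entirely hypothetical; neither these structures nor these system calls exist in xv6):
// Hypothetical split of struct proc for kernel-supported threads.
struct proc {                  // shared, per-process resources
  char *mem;                   // user memory (this xv6 uses contiguous memory + segments)
  uint sz;
  struct file *ofile[NOFILE];  // open files
  struct inode *cwd;           // current directory
};

struct thread {                // one per thread of control
  struct proc *proc;           // the process this thread belongs to
  char *kstack;                // private kernel stack
  enum proc_state state;       // RUNNABLE, RUNNING, SLEEPING, ...
  struct context context;      // saved registers for swtch()
  struct trapframe *tf;        // saved user registers
};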

How would a context switch between threads differ from a context switch between processes?

An added complication with kernel threads: what happens when several threads of the same process make system calls concurrently?

We could also implement threads in user-space (maybe even pre-emptive with a periodic interrupt like SIGALRM). What's the difference? What do we gain from kernel threads?

It's worth comparing xv6's fairly conventional context switches with those of JOS. JOS has only one thread active in the kernel at a time, and thus only a single kernel stack (not one per process) and no thread switching inside the kernel. It only has an analogue to xv6's kernel entry/exit to save/restore user thread state. But as a direct consequence JOS 1) can't run the kernel concurrently on multiple CPUs and 2) cannot have system calls that block in the middle (e.g. to look up a multi-component path name).

What would look different in JOS's implementation of scheduling, interrupts? What's the equivalent of swtch()?

We've seen the complete story of how xv6 switches between user processes. This is the concrete realization of the virtual CPU abstraction. The core mechanism is swtch(), which saves and restores thread register state and thus switches stacks (ESP) and changes the control flow (EIP). We also saw a bit of what it takes to integrate thread switching with the concurrency brought by multiple CPUs and by interrupts.