Lab: locks

In this lab you will try to avoid lock contention for certain workloads that stress the memory allocator and the buffer cache.

Start a new branch for this lab, forking from your last branch.

  $ git checkout -b lock

Memory allocator

The program user/kalloctest stresses xv6's memory allocator: three processes grow and shrink their address spaces, which results in many calls to kalloc and kfree. kalloc and kfree both obtain kmem.lock. To see whether there is lock contention for kmem.lock, run kalloctest. You will see output in which the number of test-and-sets is over 100,000:

$ kalloctest
start test0
test0 done: #test-and-sets = 180785
start test1
usertrap(): unexpected scause 0x000000000000000f pid=6
usertrap(): unexpected scause 0x000000000000000f pid=7
            sepc=0x00000000000001aa stval=0x0000000003f70004
            sepc=0x00000000000001aa stval=0x0000000003f6d004
total allocated number of pages: 32471 (out of 32768)
test1 done
$  

Each time acquire tries to obtain a lock in kernel/spinlock.c, it increments a variable that counts the number of test-and-sets. kalloctest calls a system call to retrieve that count. If there is lock contention, the number of test-and-sets will be high, because acquire loops through many test-and-set attempts before it obtains the lock.
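
The lab kernel already does this counting for you; you don't need to change acquire. As a rough sketch of the idea (the per-lock counter field nts is an assumed name, not necessarily what kernel/spinlock.c uses):

  // kernel/spinlock.c (sketch): count every test-and-set attempt.
  // The counter field name (nts) is an assumption for illustration.
  void
  acquire(struct spinlock *lk)
  {
    push_off();                // disable interrupts to avoid deadlock
    if(holding(lk))
      panic("acquire");

    for(;;){
      __sync_fetch_and_add(&lk->nts, 1);              // one more test-and-set
      if(__sync_lock_test_and_set(&lk->locked, 1) == 0)
        break;                                        // got the lock
      // otherwise keep spinning; a contended lock loops here many times
    }

    __sync_synchronize();      // memory fence: critical section starts here
    lk->cpu = mycpu();         // record the holder for holding()/debugging
  }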

The root cause of lock contention in kalloctest is that there is a single free list, protected by a single lock. To remove the contention, you will have to redesign the memory allocator to avoid a single lock and list. The basic idea is to maintain a free list per CPU, each with its own lock. Allocations and frees on different CPUs can then run in parallel, because each CPU operates on a different list. The main challenge is the case in which one CPU runs out of memory while another CPU still has free memory; in that case, the first CPU must "steal" part of the other CPU's free list. Stealing may introduce lock contention, but that may be acceptable because it should happen infrequently.
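
A minimal sketch of that layout in kernel/kalloc.c, assuming you keep xv6's struct run freelist nodes (everything beyond the existing names kmem, kinit, kfree, and freerange is illustrative, not the required design):

  // kernel/kalloc.c (sketch): one freelist, with its own lock, per CPU.
  struct run {
    struct run *next;
  };

  struct {
    struct spinlock lock;
    struct run *freelist;
  } kmem[NCPU];                     // NCPU is defined in kernel/param.h

  extern char end[];                // first address after the kernel (kernel.ld)

  void
  kinit()
  {
    for(int i = 0; i < NCPU; i++)
      initlock(&kmem[i].lock, "kmem");
    freerange(end, (void*)PHYSTOP); // all initial pages land on the booting CPU's list
  }

  // Free a page onto the current CPU's list. cpuid() is only meaningful
  // with interrupts off, hence push_off()/pop_off(). (The sanity checks
  // from the original kfree are omitted here.)
  void
  kfree(void *pa)
  {
    struct run *r;

    memset(pa, 1, PGSIZE);          // fill with junk to catch dangling references
    r = (struct run*)pa;

    push_off();
    int id = cpuid();
    acquire(&kmem[id].lock);
    r->next = kmem[id].freelist;
    kmem[id].freelist = r;
    release(&kmem[id].lock);
    pop_off();
  }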

Your job is to implement per-CPU freelists, with stealing when one CPU's list runs out of memory. Run kalloctest to see whether your implementation has removed the lock contention and can still allocate all of the free memory. Your output will look similar to the sample below, although the specific numbers will differ; a sketch of one possible allocation path follows the sample output. Make sure usertests still passes.

$ kalloctest
start test0
test0 done: #test-and-sets = 45862
start test1
usertrap(): unexpected scause 0x000000000000000f pid=7
usertrap(): unexpected scause 0x000000000000000f pid=6
            sepc=0x00000000000001aa stval=0x0000000003f71004
            sepc=0x00000000000001aa stval=0x0000000003f6b004
total allocated number of pages: 32470 (out of 32768)
test1 done
$ usertests
...
ALL TESTS PASSED
$
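
Continuing the sketch above, one possible allocation path (again an illustration, not the only correct design) steals a single page from another CPU's list when the local list is empty:

  // kernel/kalloc.c (sketch): allocate from the current CPU's list,
  // stealing from another CPU when the local list is empty. Only one
  // kmem lock is ever held at a time, so there is no deadlock risk.
  void *
  kalloc(void)
  {
    struct run *r;

    push_off();
    int id = cpuid();

    acquire(&kmem[id].lock);
    r = kmem[id].freelist;
    if(r)
      kmem[id].freelist = r->next;
    release(&kmem[id].lock);

    if(!r){
      // Local list is empty: steal from another CPU. Stealing a batch
      // of pages instead of one would make this path run less often.
      for(int i = 0; i < NCPU && !r; i++){
        if(i == id)
          continue;
        acquire(&kmem[i].lock);
        r = kmem[i].freelist;
        if(r)
          kmem[i].freelist = r->next;
        release(&kmem[i].lock);
      }
    }
    pop_off();

    if(r)
      memset((char*)r, 5, PGSIZE);  // fill with junk
    return (void*)r;
  }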

Some hints:

Buffer cache

Several processes reading different files repeatedly will bottleneck in the buffer cache, bcache, in kernel/bio.c. Run bcachetest and you will see output like the following, with a test-and-set count larger than 15,000,000:

$ bcachetest
start test0 
test0 done: #test-and-sets: 17608674 
start test1
test1 done
$

Modify bget so that a lookup for a buffer that is in the bcache doesn't need to acquire bcache.lock. This is trickier than the kalloc assignment, because bcache buffers are truly shared among processes. You must maintain the invariant that each block is cached in memory at most once. When you are done, you may see output like the sample below; a sketch of one possible lookup path follows it. Make sure usertests still passes.

$ bcachetest
start test0
test0 done: #test-and-sets: 9142528
start test1
test1 done
$ usertests
  ...
ALL TESTS PASSED
$
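
Since the hints for this part are not reproduced here, treat the following as one possible approach rather than the prescribed design: hash block numbers into a fixed number of buckets, each protected by its own lock, so that a hit only touches one bucket's lock. A sketch of the data structure and the hit path (NBUCKET, bucket, bhash, and blookup are made-up names):

  // kernel/bio.c (sketch): per-bucket locks so that looking up a cached
  // block does not need bcache.lock. binit must initialize each bucket's
  // lock and its circular list (not shown).
  #define NBUCKET 13

  struct {
    struct spinlock lock;           // protects this bucket's list
    struct buf head;                // dummy head of the bucket's buffer list
  } bucket[NBUCKET];

  static uint
  bhash(uint blockno)
  {
    return blockno % NBUCKET;
  }

  // Hit path: return the cached buffer for (dev, blockno), or 0 on a miss.
  // Hits on different buckets proceed in parallel.
  static struct buf*
  blookup(uint dev, uint blockno)
  {
    struct buf *b;
    uint h = bhash(blockno);

    acquire(&bucket[h].lock);
    for(b = bucket[h].head.next; b != &bucket[h].head; b = b->next){
      if(b->dev == dev && b->blockno == blockno){
        b->refcnt++;
        release(&bucket[h].lock);
        acquiresleep(&b->lock);
        return b;
      }
    }
    release(&bucket[h].lock);
    return 0;
  }

On a miss, bget still has to pick a victim buffer and insert it into the bucket; re-check the bucket after reacquiring its lock so that the same block is never cached twice, and watch your lock ordering if you ever hold two bucket locks at once.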

There are several races that bcache.lock protects against, including:

A challenge is testing whether your code is still correct. One way to do so is to artificially delay certain operations using sleepticks. test1 trashes the buffer cache and exercises more code paths.
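
If you don't use sleepticks, even a crude busy-wait (a hypothetical test-only helper, not part of xv6) is enough to widen race windows:

  // Hypothetical helper: burn cycles to widen race windows while testing.
  // Drop a call to it into, say, bget between releasing one lock and
  // acquiring another, then run bcachetest and usertests.
  static void
  testdelay(void)
  {
    for(volatile int i = 0; i < 1000000; i++)
      ;
  }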

Here are some hints:

Check that your implementation has less contention on test0.

Optional:

This completes the lab. Commit your changes and type make handin in the lab directory to hand in your lab.